OpenClaw Memory System Explained: 4 Layers You Must Know

Let me be straightforward about something before we dive in: most AI agent memory systems are garbage. Not "needs a little tuning" garbage — fundamentally, architecturally broken garbage. You build an agent, it works great for five messages, and by message twenty it's either hallucinating, repeating itself, forgetting critical information, or costing you a small fortune in token fees. Sometimes all four at once.
I've watched people in every AI builder community go through the same painful cycle. They start with a simple conversation buffer, realize it doesn't scale, bolt on a vector store, discover that vector search returns irrelevant nonsense half the time, try summarization, watch the model compress "I have a life-threatening peanut allergy" into "user has food preferences," and then throw their laptop out the window.
OpenClaw solves this problem — and it does it with a layered memory architecture that actually makes sense once you understand how the pieces fit together. But here's the thing: most people using OpenClaw don't fully grasp the memory system, so they either underuse it or configure it in ways that create the exact same problems they were trying to escape.
This post is going to fix that. We're going to walk through all four layers of OpenClaw's memory system, explain why each exists, show you how to configure them properly, and make sure you never build another agent that forgets your name mid-conversation.
The Core Problem: Why Memory Is So Hard
Before we get into OpenClaw's solution, let's be precise about the problem. AI agents need memory for the same reason you do — to maintain continuity, learn from experience, and act on accumulated knowledge. But LLMs are fundamentally stateless. Every API call starts from scratch. Whatever "memory" your agent has is really just text you're stuffing into the context window before each request.
This creates a set of cascading problems:
Token bloat. If you dump everything in, you burn through context limits and money fast. A GPT-4 class model at 40k tokens per request adds up to real dollars within hours of active use.
Signal loss. If you try to compress with summarization, critical details get destroyed. The model decides what's "important" and it frequently decides wrong.
Retrieval noise. If you use vector similarity search to pull in relevant past memories, you get results that are semantically adjacent but contextually wrong. You ask about your trip to Japan, and the model retrieves notes from your trip to Italy because the embeddings for "international travel planning" are close together.
State amnesia. Most frameworks treat memory as disposable. Restart the process, lose the memory. Switch threads, start over. This makes building anything persistent feel like building on quicksand.
OpenClaw's memory system was designed from the ground up to address all four of these failure modes simultaneously. It does this through four distinct layers, each handling a different time horizon and type of information.
Layer 1: The Buffer (Immediate Context)
The buffer is the simplest layer, and if you've built any kind of chatbot before, you've used something like it. It holds the most recent messages in the current conversation — typically the last N turns or the last N tokens worth of interaction.
In OpenClaw, the buffer is configured in your agent's memory block:
memory:
  buffer:
    max_turns: 12
    max_tokens: 4000
    strategy: sliding_window
This is straightforward. The sliding_window strategy keeps the most recent turns and drops the oldest ones as you exceed the limit. OpenClaw also supports token_priority, which keeps turns that contain more information-dense content (measured by entity count and semantic weight) over turns that are mostly filler.
memory:
  buffer:
    max_turns: 16
    max_tokens: 6000
    strategy: token_priority
    priority_signals:
      - entity_density
      - user_instruction
      - factual_content
The user_instruction signal is particularly useful. It tells the buffer to preferentially retain turns where the user gave explicit directives ("always do X," "never do Y," "remember that Z"). These are the messages most likely to cause problems if they get dropped.
When to use which strategy: For casual conversational agents, sliding_window is fine. For agents that handle tasks with specific requirements — project management, customer support, personal assistants — use token_priority. The overhead is minimal, and it prevents the most common class of "you literally just told me that" errors.
The buffer alone isn't memory in any meaningful sense. It's a short-term scratchpad. The magic starts when it interacts with the other three layers.
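To make the buffer's eviction behavior concrete, here's a minimal Python sketch of a sliding window that enforces both a turn cap and a token cap, dropping the oldest turns first. This is an illustration of the strategy, not OpenClaw's actual implementation, and the whitespace token count is a crude stand-in for a real tokenizer.

```python
from collections import deque

class SlidingWindowBuffer:
    """Illustrative sketch of a sliding_window buffer (not OpenClaw's code)."""

    def __init__(self, max_turns=12, max_tokens=4000):
        self.max_turns = max_turns
        self.max_tokens = max_tokens
        self.turns = deque()  # oldest turn sits on the left

    @staticmethod
    def _count_tokens(text):
        # Crude stand-in for a real tokenizer.
        return len(text.split())

    def add_turn(self, role, text):
        self.turns.append((role, text))
        # Evict oldest turns until both caps are satisfied.
        while (len(self.turns) > self.max_turns or
               sum(self._count_tokens(t) for _, t in self.turns) > self.max_tokens):
            self.turns.popleft()

    def render(self):
        return [{"role": r, "content": t} for r, t in self.turns]
```

A token_priority strategy would replace the `popleft()` eviction with a scoring pass over the turns, but the cap-checking loop stays the same shape.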
Layer 2: The Digest (Session Summaries With Teeth)
This is where OpenClaw diverges from the lazy summarization that plagues other frameworks. Instead of asking the LLM to "summarize the conversation," OpenClaw runs a structured extraction process at configurable intervals — typically every N turns or when the buffer is about to overflow.
The digest layer doesn't produce a paragraph of vague summary. It produces structured memory objects:
memory:
  digest:
    trigger: every_8_turns
    extraction_schema:
      - facts: "Explicit factual statements made by the user"
      - preferences: "Stated likes, dislikes, requirements, constraints"
      - decisions: "Choices made or agreements reached"
      - open_items: "Unresolved questions or pending tasks"
      - corrections: "Anything the user corrected or clarified"
Here's what a digest extraction actually looks like in practice. Say a user had this exchange with a meal planning agent:
User: I'm planning meals for next week. I'm vegetarian, but I eat eggs and dairy. My partner is vegan. We both hate cilantro. Budget is about $80.
A typical summarization model would produce: "User is planning meals for the week with dietary restrictions and a budget."
OpenClaw's digest produces:
{
  "facts": [
    "Planning meals for next week",
    "Budget: approximately $80"
  ],
  "preferences": [
    "User: vegetarian (lacto-ovo)",
    "Partner: vegan",
    "Both: dislike cilantro"
  ],
  "decisions": [],
  "open_items": [
    "No specific meals selected yet",
    "Number of meals not specified"
  ],
  "corrections": []
}
The difference is night and day. Every critical detail is preserved in a structured format that's easy to retrieve, easy to validate, and — crucially — easy to update. When the user says "actually, my partner started eating fish again," the digest layer knows to update the specific preference record for the partner rather than appending a contradictory summary on top of the old one.
The corrections field is particularly powerful. It creates an explicit record of things the agent got wrong or the user changed their mind about. This feeds into the retrieval layer (Layer 4) to down-weight outdated information.
memory:
  digest:
    trigger: every_8_turns
    on_correction:
      action: update_in_place
      preserve_history: true
      deprecation_flag: true
With update_in_place enabled and preserve_history on, OpenClaw will update the active memory record while keeping a versioned history. The old version gets a deprecation flag so it won't surface in retrieval unless explicitly queried. This is how you solve the "agent keeps using outdated facts" problem that drives everyone crazy.
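To illustrate what update-in-place with a preserved, deprecation-flagged history could look like, here's a hedged Python sketch. The record shape and the `PreferenceStore` class are my own invention for illustration, not OpenClaw's internal API.

```python
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    subject: str
    value: str
    version: int = 1
    deprecated: bool = False

class PreferenceStore:
    """Hypothetical sketch of update_in_place + preserve_history."""

    def __init__(self):
        self.active = {}    # subject -> current MemoryRecord
        self.history = []   # deprecated versions, kept for auditing

    def upsert(self, subject, value):
        old = self.active.get(subject)
        if old is not None and old.value != value:
            # Flag the old version so retrieval skips it by default.
            old.deprecated = True
            self.history.append(old)
            record = MemoryRecord(subject, value, version=old.version + 1)
        elif old is None:
            record = MemoryRecord(subject, value)
        else:
            return old  # unchanged, nothing to update
        self.active[subject] = record
        return record

store = PreferenceStore()
store.upsert("partner_diet", "vegan")
store.upsert("partner_diet", "pescatarian")  # "partner started eating fish again"
```

After the second upsert, the active record is version 2 ("pescatarian") while version 1 ("vegan") survives in history with its deprecation flag set, which is exactly the behavior the config above describes.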
Layer 3: The Vault (Persistent Long-Term Storage)
The vault is where memories go to live permanently. While the buffer holds minutes of context and the digest holds hours to days, the vault holds weeks, months, and potentially years of accumulated knowledge about users, projects, and domains.
OpenClaw's vault uses a hybrid storage approach — and this is the part that most people misconfigure.
memory:
  vault:
    storage_backend: hybrid
    vector_store:
      provider: default
      embedding_model: openclaw-embed-v2
      dimensions: 768
    graph_store:
      enabled: true
      entity_extraction: auto
      relationship_types:
        - "has_preference"
        - "works_on"
        - "is_related_to"
        - "contradicts"
        - "supersedes"
    metadata:
      track_recency: true
      track_access_frequency: true
      track_importance: true
      importance_scorer: llm
The hybrid approach combines vector embeddings (good for semantic similarity) with a knowledge graph (good for structured relationships). When you store a memory like "User is allergic to peanuts," it gets both a vector embedding for semantic search AND a graph entry linking the user entity to "peanut allergy" via a "has_condition" relationship.
Why does this matter? Because pure vector search is what produces those terrible "I asked about Japan and got Italy" retrievals. When you also have the graph, OpenClaw can traverse relationships: "User → planning trip → Japan → needs visa info" stays cleanly separated from "User → past trip → Italy → hotel reviews."
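Here's a toy Python sketch of why the hybrid approach helps: vector similarity alone can barely tell two semantically close memories apart, but a graph constraint (reduced here to a simple entity-tag check) separates them cleanly. The tiny embeddings and the filter are invented for illustration and say nothing about OpenClaw's real index.

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy memories: (text, fake embedding, graph entity tags)
memories = [
    ("Visa requirements for Japan trip", [0.9, 0.4, 0.1], {"trip:japan"}),
    ("Hotel reviews from Italy trip",    [0.8, 0.5, 0.1], {"trip:italy"}),
]

def retrieve(query_vec, required_entity):
    # Graph constraint first, similarity ranking second.
    candidates = [m for m in memories if required_entity in m[2]]
    return max(candidates, key=lambda m: cosine(query_vec, m[1]))

# A "Japan travel" query embedding sits close to BOTH memories,
# but the graph filter keeps the Italy memory out of the results.
query = [0.85, 0.45, 0.1]
best = retrieve(query, required_entity="trip:japan")
```

Pure vector ranking over these two embeddings is nearly a coin flip; the entity filter is what makes the result deterministic.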
The metadata tracking is equally important. Every memory in the vault carries three scores:
- Recency: When was this memory last created or updated?
- Access frequency: How often has this memory been retrieved and used?
- Importance: How critical is this memory? (Scored by an LLM call during storage.)
These three scores combine during retrieval to produce a composite relevance score. A memory about a life-threatening allergy might have moderate recency (mentioned weeks ago), low access frequency (only comes up in food contexts), but extremely high importance — so it still surfaces when needed.
# How vault scoring works conceptually (illustrative scores, each in 0-1)
weights = {"semantic_similarity": 0.35, "recency": 0.25,
           "importance": 0.30, "access_frequency": 0.10}
vector_score, recency_score = 0.72, 0.40
importance_score, frequency_score = 0.95, 0.15
composite_score = (
    weights["semantic_similarity"] * vector_score +
    weights["recency"] * recency_score +
    weights["importance"] * importance_score +
    weights["access_frequency"] * frequency_score
)
You can tune the weights in your config:
memory:
  vault:
    retrieval_weights:
      semantic_similarity: 0.35
      recency: 0.25
      importance: 0.30
      access_frequency: 0.10
For most use cases, these default weights work well. If you're building something where recent context matters more (like a news analysis agent), bump recency. For safety-critical applications (medical, financial), bump importance.
Layer 4: The Lens (Retrieval Orchestration)
The lens is what ties everything together, and it's the layer that most people don't even realize exists. It's not a storage layer — it's the retrieval and assembly layer. It decides what memories from the other three layers actually get injected into the prompt for each request.
Think of it this way: the buffer, digest, and vault are all sources of memory. The lens is the librarian who decides which books to pull off the shelf for this specific question.
memory:
  lens:
    context_budget: 3000   # max tokens allocated to memory in each prompt
    allocation:
      buffer: 0.40   # 40% to recent conversation
      digest: 0.25   # 25% to structured session facts
      vault: 0.35    # 35% to long-term memories
    retrieval:
      vault_top_k: 5
      deduplication: true
      contradiction_check: true
      temporal_filter:
        enabled: true
        prefer_recent: true
        decay_rate: 0.05
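One common way to implement a temporal filter with a decay_rate like 0.05 is exponential decay over memory age. Whether OpenClaw uses exactly this curve is an assumption on my part, but the shape of the idea is standard:

```python
import math

def recency_score(age_days, decay_rate=0.05):
    # Exponential decay: a fresh memory scores ~1.0 and fades toward 0.
    # With decay_rate=0.05 the score halves roughly every two weeks
    # (half-life = ln(2) / 0.05 ≈ 13.9 days).
    return math.exp(-decay_rate * age_days)
```

Under this curve, a memory from yesterday scores about 0.95 while one from two months ago scores about 0.05, which is what lets prefer_recent bias retrieval without hard-deleting anything.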
The context_budget is critical. This is the total number of tokens the lens is allowed to spend on memory injection. By capping this, you prevent the token bloat problem entirely. The agent will never spend more than 3000 tokens on memory, no matter how much history exists.
The allocation split determines how that budget is divided. The 40/25/35 default split works for most conversational agents, but you should adjust based on your use case:
- Task execution agents (following multi-step instructions): bump buffer to 50-60%, reduce vault to 20%
- Personal assistant agents (long-running, relationship-heavy): bump vault to 45-50%, reduce buffer to 25-30%
- Research agents (accumulating knowledge over time): bump vault to 50-55%, bump digest to 30%
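The budget math itself is simple. Here's a sketch of how a 3000-token budget splits under the default 40/25/35 allocation; the truncating-int rounding is my choice for illustration, not documented OpenClaw behavior.

```python
def split_budget(context_budget, allocation):
    # Convert fractional allocation into per-layer token caps.
    return {layer: int(context_budget * frac)
            for layer, frac in allocation.items()}

caps = split_budget(3000, {"buffer": 0.40, "digest": 0.25, "vault": 0.35})
# → {'buffer': 1200, 'digest': 750, 'vault': 1050}
```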
The contradiction_check feature is subtle but incredibly valuable. Before injecting memories into the prompt, the lens checks for contradictions between the vault, digest, and buffer. If the buffer says "user just switched to vegan" but the vault still says "user is vegetarian," the lens flags this, prioritizes the more recent information, and triggers a vault update.
memory:
  lens:
    contradiction_resolution:
      strategy: prefer_recent
      auto_update_vault: true
      notify_agent: true   # adds a system note about the contradiction
With notify_agent enabled, the agent actually receives a note in its system prompt saying something like: "Note: User's dietary status was updated from 'vegetarian' to 'vegan' based on recent conversation. Previous records have been updated." This helps the agent handle the transition gracefully rather than awkwardly pretending the change didn't happen.
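A prefer_recent resolution can be sketched in a few lines: given two conflicting records with timestamps, keep the newer one and emit the system note. The record shape and note wording below are my own illustration, not OpenClaw's API.

```python
def resolve_contradiction(vault_record, recent_record):
    """Hypothetical prefer_recent resolution (not OpenClaw's actual code).

    Each record is a dict with 'field', 'value', and 'timestamp'
    (a higher timestamp means more recent).
    """
    if vault_record["value"] == recent_record["value"]:
        return vault_record, None  # no contradiction to resolve
    winner = max(vault_record, recent_record, key=lambda r: r["timestamp"])
    loser = min(vault_record, recent_record, key=lambda r: r["timestamp"])
    note = (f"Note: User's {winner['field']} was updated from "
            f"{loser['value']!r} to {winner['value']!r} based on recent "
            "conversation. Previous records have been updated.")
    return winner, note

winner, note = resolve_contradiction(
    {"field": "dietary status", "value": "vegetarian", "timestamp": 100},
    {"field": "dietary status", "value": "vegan", "timestamp": 200},
)
```

The returned note is what notify_agent would surface in the system prompt, so the agent can acknowledge the change instead of silently switching positions.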
Putting It All Together
Here's what a complete memory configuration looks like for a personal project management agent — the kind that tracks your freelance clients, deadlines, and preferences across weeks of interaction:
agent:
  name: project_assistant
  memory:
    buffer:
      max_turns: 14
      max_tokens: 5000
      strategy: token_priority
      priority_signals:
        - user_instruction
        - entity_density
        - deadline_mention
    digest:
      trigger: every_10_turns
      extraction_schema:
        - facts
        - preferences
        - decisions
        - open_items
        - corrections
        - deadlines
      on_correction:
        action: update_in_place
        preserve_history: true
    vault:
      storage_backend: hybrid
      vector_store:
        provider: default
        embedding_model: openclaw-embed-v2
      graph_store:
        enabled: true
        entity_extraction: auto
      retrieval_weights:
        semantic_similarity: 0.30
        recency: 0.30
        importance: 0.30
        access_frequency: 0.10
    lens:
      context_budget: 4000
      allocation:
        buffer: 0.35
        digest: 0.30
        vault: 0.35
      retrieval:
        vault_top_k: 6
        deduplication: true
        contradiction_check: true
With this configuration, your agent will remember that Client A always wants revisions in Google Docs format, Client B has a NET-30 payment policy you need to track, your hourly rate went up last month, and the project deadline got pushed from Friday to next Wednesday. And it'll do all of this without blowing up your token budget or retrieving irrelevant memories about a different client when you're focused on a specific one.
The Fast Path: Skip the Configuration Headaches
I spent a lot of time getting my memory configs right through trial and error. If you don't want to set all of this up manually and tune the weights yourself, Felix's OpenClaw Starter Pack on Claw Mart includes pre-configured memory setups for the most common agent types — personal assistants, task managers, research agents, and more. It's $29, includes pre-built skills that handle the digest extraction schema and vault scoring weights out of the box, and honestly would have saved me a solid weekend of tweaking YAML files and debugging why my retrieval weights were producing garbage results. If you're just getting started with OpenClaw and want to skip the "why is my agent forgetting everything" phase, it's the fastest path to a working setup.
What Most People Get Wrong
After helping a bunch of people set up OpenClaw agents, here are the three most common mistakes:
1. They skip the graph store. Running with vector-only vault storage because it's simpler to configure. This works fine for simple chatbots but falls apart immediately for agents that deal with multiple entities (clients, projects, people, locations). Turn on the graph store. The overhead is minimal and the retrieval quality improvement is dramatic.
2. They set the context budget too high. People think "more memory = better agent." Wrong. More memory = more noise, more confusion, higher cost. A well-curated 3000 tokens of memory outperforms a sloppy 12000 tokens every single time. Trust the lens to pick the right memories and keep the budget tight.
3. They don't use the corrections field in the digest. This is the single most impactful feature for agents that run over long time horizons. Without it, outdated information accumulates in the vault and the agent becomes confidently wrong about things that changed weeks ago.
Next Steps
If you're already running an OpenClaw agent with default memory settings, here's what to do right now:
- Enable the graph store in your vault config if it's not already on.
- Add corrections to your digest extraction schema. This single change will prevent the most embarrassing class of memory errors.
- Set an explicit context budget in your lens config. If you haven't set one, the lens may be injecting way more memory than your agent needs.
- Check your retrieval weights. The defaults are reasonable, but if your agent handles safety-critical information (allergies, medications, financial data), bump the importance weight to 0.35-0.40.
- Test with a multi-session scenario. Have a conversation, restart, come back and reference something from the first conversation. If your agent doesn't remember, your vault persistence isn't configured correctly.
Memory is the difference between a demo and a product. An agent that forgets what you told it yesterday is a toy. An agent that builds a reliable, updatable, accurately retrievable model of your world over weeks and months — that's genuinely useful software. OpenClaw gives you the architecture to build the latter. Now you know how to use it.