- Context compaction — automatically summarizes conversation history when the context window fills up, preserving recent turns and discarding older ones.
- Semantic memory — indexes discarded messages so the agent can retrieve past context on demand via the memory_search tool.
Feature flags
Both systems require compile-time features and per-request enablement.

| Feature flag | Cargo feature | What it enables |
|---|---|---|
| Compaction strategy | session-compaction | DefaultCompactor in meerkat-session |
| Memory store backend | memory-store | meerkat-store/memory backend |
| Memory + compaction wiring | memory-store-session | HnswMemoryStore, MemorySearchDispatcher, agent loop integration |
The CLI’s default features include sub-agents and skills but not session-compaction or memory-store. You must enable them explicitly.

AgentFactory has an enable_memory flag (default: false). Per-request builds can override it via AgentBuildConfig::override_memory, as sketched below.
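As a rough sketch of how the two flags interact, assuming the per-request override behaves as an optional boolean that, when set, takes precedence over the factory default (the real AgentFactory and AgentBuildConfig types are not shown here):

```rust
// Sketch only: illustrates the precedence between the factory default and the
// per-request override; the actual meerkat types and signatures may differ.
fn memory_enabled(factory_enable_memory: bool, override_memory: Option<bool>) -> bool {
    // A per-request override, when present, wins over the factory default.
    override_memory.unwrap_or(factory_enable_memory)
}
```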
CompactionConfig
Controls when and how compaction runs.

| Field | Type | Default | Description |
|---|---|---|---|
| auto_compact_threshold | u64 | 100_000 | Compaction triggers when last_input_tokens >= threshold OR estimated_history_tokens >= threshold. |
| recent_turn_budget | usize | 4 | Number of recent complete turns to retain after compaction. A turn is a User message followed by all subsequent non-User messages until the next User message. |
| max_summary_tokens | u32 | 4096 | Maximum tokens the LLM may produce for the compaction summary. |
| min_turns_between_compactions | u32 | 3 | Minimum turns that must elapse between consecutive compactions (loop guard to prevent runaway compaction). |
How compaction triggers
Compaction is checked at every turn boundary, just before the next LLM call. The decision flow (sketched below):
- Loop guard check: if compaction occurred at turn N, no further compaction runs until turn N + min_turns_between_compactions.
- Threshold check: compaction triggers when last_input_tokens or estimated_history_tokens reaches auto_compact_threshold (see the CompactionConfig table above).
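A minimal sketch of that check; the parameter names mirror the CompactionConfig fields documented above, but the function itself is illustrative, not the meerkat-session implementation:

```rust
// Illustrative sketch of the compaction trigger decision.
fn should_compact(
    last_input_tokens: u64,
    estimated_history_tokens: u64,
    auto_compact_threshold: u64,
    current_turn: u32,
    last_compaction_turn: Option<u32>,
    min_turns_between_compactions: u32,
) -> bool {
    // Loop guard: skip if a compaction happened too recently.
    if let Some(last) = last_compaction_turn {
        if current_turn < last + min_turns_between_compactions {
            return false;
        }
    }
    // Threshold: either token measure reaching the threshold triggers compaction.
    last_input_tokens >= auto_compact_threshold
        || estimated_history_tokens >= auto_compact_threshold
}
```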
What happens during compaction
When compaction triggers:
1. Send compaction prompt to LLM: the current conversation history plus a compaction prompt is sent to the LLM with no tools and max_summary_tokens as the response limit.
2. Handle result:
   - On failure: a CompactionFailed event is emitted and the session is not mutated (safe failure).
   - On success: DefaultCompactor::rebuild_history produces new messages (sketched below):
     - The system prompt is preserved verbatim (if present).
     - A summary message is injected as a User message with the prefix [Context compacted].
     - The last recent_turn_budget complete turns are retained.
     - All other messages become discarded.
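A conceptual sketch of that rebuild, assuming a simplified Message enum; the real meerkat-session types and the actual DefaultCompactor::rebuild_history signature will differ:

```rust
// Conceptual sketch of the post-compaction history rebuild; `Message` is a
// simplified stand-in, not meerkat-session's real message type.
#[derive(Clone)]
enum Message {
    System(String),
    User(String),
    Assistant(String),
}

fn rebuild_history(history: &[Message], summary: &str, recent_turn_budget: usize) -> Vec<Message> {
    let mut rebuilt = Vec::new();

    // 1. Preserve the system prompt verbatim, if present.
    if let Some(Message::System(prompt)) = history.first() {
        rebuilt.push(Message::System(prompt.clone()));
    }

    // 2. Inject the summary as a User message with the "[Context compacted]" prefix.
    rebuilt.push(Message::User(format!("[Context compacted] {summary}")));

    // 3. Retain the last `recent_turn_budget` complete turns; a turn starts at a
    //    User message and runs until the next User message.
    let turn_starts: Vec<usize> = history
        .iter()
        .enumerate()
        .filter_map(|(i, m)| match m {
            Message::User(_) => Some(i),
            _ => None,
        })
        .collect();
    if recent_turn_budget > 0 {
        let start = turn_starts
            .iter()
            .rev()
            .nth(recent_turn_budget - 1)
            .or_else(|| turn_starts.first())
            .copied();
        if let Some(start) = start {
            // Everything before `start` is discarded (and handed off to memory indexing).
            rebuilt.extend_from_slice(&history[start..]);
        }
    }
    rebuilt
}
```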
The compaction prompt
The compactor sends this prompt to the LLM:
You are performing a CONTEXT COMPACTION. Your job is to create a handoff summary so work can continue seamlessly. Include:
- Current progress and key decisions made
- Important context, constraints, or user preferences discovered
- What remains to be done (clear next steps)
- Any critical data, file paths, examples, or references needed to continue
- Tool call patterns that worked or failed
Be concise and structured. Prioritize information the next context needs to act, not narrate.
Memory indexing after compaction
When both a Compactor and a MemoryStore are wired into the agent, discarded messages are indexed into semantic memory after compaction completes. This happens asynchronously (fire-and-forget via tokio::spawn).
For each discarded message:
- The message’s indexable text content is extracted via message.as_indexable_text().
- If non-empty, it is indexed with MemoryMetadata containing the session ID, current turn number, and a timestamp (see the sketch below).
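A rough illustration of the fire-and-forget flow; the MemoryStore trait, MemoryMetadata fields, and Message type shown here are simplified stand-ins, not meerkat's real API:

```rust
// Sketch of post-compaction indexing; all types here are illustrative stand-ins.
use std::sync::Arc;
use std::time::SystemTime;

struct MemoryMetadata {
    session_id: String,
    turn: u64,
    timestamp: SystemTime,
}

trait MemoryStore: Send + Sync {
    fn index(&self, text: &str, meta: MemoryMetadata);
}

struct Message(String);

impl Message {
    fn as_indexable_text(&self) -> String {
        self.0.clone()
    }
}

/// Fire-and-forget indexing of the messages discarded by compaction.
fn index_discarded(store: Arc<dyn MemoryStore>, discarded: Vec<Message>, session_id: String, turn: u64) {
    tokio::spawn(async move {
        for message in discarded {
            let text = message.as_indexable_text();
            if text.is_empty() {
                continue; // only non-empty content is indexed
            }
            store.index(
                &text,
                MemoryMetadata {
                    session_id: session_id.clone(),
                    turn,
                    timestamp: SystemTime::now(),
                },
            );
        }
    });
}
```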
These indexed entries are what the memory_search tool later retrieves.
The memory_search tool
When memory is enabled, the agent gains a memory_search tool.
Tool definition
| Property | Value |
|---|---|
| Name | memory_search |
| Description | Search semantic memory for past conversation content. Memory contains text from earlier conversation turns that were compacted away to save context space. |
Parameters
- Natural language search query describing what you want to recall.
- Maximum number of results to return. Capped at 20.
Response format
Returns a JSON array of result objects:
- The text content of the memory entry.
- Similarity score from 0.0 (no match) to 1.0 (exact match). Typical useful matches are above 0.7.
- The session the memory originated from (enables cross-session recall).
- The turn number within the session when the memory was indexed.
Memory store implementations
- HnswMemoryStore (production)
- SimpleMemoryStore (test-only)
HnswMemoryStore uses:
- hnsw_rs (v0.3) for approximate nearest-neighbor search with cosine distance.
- redb for persistent metadata and text storage.
- On-disk location: .rkat/memory/memory.redb

Key characteristics:
- Embedding: Bag-of-words TF with hash-based dimensionality reduction (4096-dimensional vectors, L2-normalized). Each word is hashed to a bucket and its presence increments that dimension (sketched below, together with the score conversion).
- Persistence: Data survives process restart. On open(), all existing entries are re-indexed into the HNSW graph from redb.
- Score conversion: HNSW cosine distance (0 = identical, 2 = opposite) is converted to a 0..1 similarity score: score = 1.0 - (distance / 2.0).
- Thread safety: Insertions are serialized via a Mutex to couple point ID allocation with successful writes. Searches use a RwLock for concurrent reads.
- Constants: MAX_NB_CONNECTION = 16, MAX_LAYER = 16, EF_CONSTRUCTION = 200, DEFAULT_MAX_ELEMENTS = 100_000.
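A minimal sketch of the embedding and score conversion described above, assuming whitespace tokenization and the standard library hasher; the real HnswMemoryStore may tokenize and hash differently:

```rust
// Illustrative sketch, not the meerkat-store source.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const DIM: usize = 4096;

/// Bag-of-words term-frequency embedding: hash each word to one of 4096 buckets,
/// accumulate counts, then L2-normalize the vector.
fn embed(text: &str) -> Vec<f32> {
    let mut v = vec![0.0f32; DIM];
    for word in text.split_whitespace() {
        let mut hasher = DefaultHasher::new();
        word.hash(&mut hasher);
        v[(hasher.finish() as usize) % DIM] += 1.0;
    }
    let norm = v.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        v.iter_mut().for_each(|x| *x /= norm);
    }
    v
}

/// Convert HNSW cosine distance (0 = identical, 2 = opposite) into a 0..1 similarity.
fn similarity_score(distance: f32) -> f32 {
    1.0 - (distance / 2.0)
}
```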
How memory gets wired
When the memory-store-session feature is compiled in and memory is enabled:
- An HnswMemoryStore is opened at {store_path}/memory/.
- The memory_search tool is added to the agent’s tool set.
- A DefaultCompactor is attached (if session-compaction is also enabled).
- A built-in memory-retrieval skill is injected into the system prompt, teaching the agent how to use memory search.
Examples
Memory and compaction can be enabled from both the CLI and the SDK.
Custom CompactionConfig
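A minimal SDK-style sketch; only the field names and defaults come from the CompactionConfig table above, and the struct is redeclared locally so the snippet stands alone (meerkat-session's real type and construction style may differ):

```rust
// Illustrative only: CompactionConfig is redeclared here for a self-contained
// example; field names and defaults mirror the table above.
struct CompactionConfig {
    auto_compact_threshold: u64,
    recent_turn_budget: usize,
    max_summary_tokens: u32,
    min_turns_between_compactions: u32,
}

impl Default for CompactionConfig {
    fn default() -> Self {
        Self {
            auto_compact_threshold: 100_000,
            recent_turn_budget: 4,
            max_summary_tokens: 4096,
            min_turns_between_compactions: 3,
        }
    }
}

fn main() {
    // Compact earlier and keep more recent turns than the defaults.
    let compaction = CompactionConfig {
        auto_compact_threshold: 60_000,
        recent_turn_budget: 6,
        ..CompactionConfig::default()
    };
    assert_eq!(compaction.max_summary_tokens, 4096);
    assert_eq!(compaction.min_turns_between_compactions, 3);
}
```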
See also
- Configuration: memory and compaction - config file settings
- Architecture - how compaction fits into the agent loop
