# open-mem

Persistent memory for OpenCode — captures, compresses, and recalls context across coding sessions.

## Requirements
- OpenCode (the AI coding assistant)
- Bun >= 1.0
## Installation

```bash
bun add open-mem
```

Add open-mem to the plugin array in your OpenCode config (`~/.config/opencode/opencode.json`):

```json
{
  "plugin": ["open-mem"]
}
```

> Note: If you already have plugins, just append `"open-mem"` to the existing array.

That's it. open-mem starts capturing from your next OpenCode session.
## AI Provider Setup (optional)

For intelligent compression of observations, configure an AI provider:

**Google Gemini (default — free tier):**

```bash
# Get a free key at https://aistudio.google.com/apikey
export GOOGLE_GENERATIVE_AI_API_KEY=...
```

**Anthropic:**

```bash
export OPEN_MEM_PROVIDER=anthropic
export ANTHROPIC_API_KEY=sk-ant-...
export OPEN_MEM_MODEL=claude-sonnet-4-20250514
```

**AWS Bedrock:**

```bash
export OPEN_MEM_PROVIDER=bedrock
export OPEN_MEM_MODEL=us.anthropic.claude-sonnet-4-20250514-v1:0
# Uses AWS credentials from the environment (AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY or AWS_PROFILE)
```

**OpenAI** (requires `bun add @ai-sdk/openai`):

```bash
export OPEN_MEM_PROVIDER=openai
export OPENAI_API_KEY=sk-...
export OPEN_MEM_MODEL=gpt-4o
```

**OpenRouter (100+ models):**

```bash
export OPEN_MEM_PROVIDER=openrouter
export OPENROUTER_API_KEY=sk-or-...
export OPEN_MEM_MODEL=google/gemini-2.5-flash-lite
```

**Auto-detection:** open-mem detects your provider from environment variables: `GOOGLE_GENERATIVE_AI_API_KEY` → Google, `ANTHROPIC_API_KEY` → Anthropic, AWS credentials → Bedrock, `OPENROUTER_API_KEY` → OpenRouter.

Without any provider configured, open-mem still works — it falls back to a basic metadata extractor that captures tool names, file paths, and output snippets.
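If the primary provider fails, compression can also fall back down a provider chain (`OPEN_MEM_FALLBACK_PROVIDERS` is documented under Configuration below):

```bash
# Try Google first, then fall back to Anthropic, then OpenAI
export OPEN_MEM_FALLBACK_PROVIDERS=google,anthropic,openai
```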
## Why open-mem?

- Native vector search — sqlite-vec embedded directly in SQLite, no external vector database required
- Knowledge graph — automatic entity extraction with relationships, graph-augmented search via traversal
- 5 AI providers + fallback chain — Google, Anthropic, AWS Bedrock, OpenAI, OpenRouter with automatic failover
- Advanced search — FTS5 full-text + vector similarity + Reciprocal Rank Fusion + graph traversal + reranking
- Revision lineage — immutable history with audit trail; revisions never overwrite, they supersede
- User-level memory — cross-project memories stored in a separate user-scoped database
- Web dashboard — 6-page management UI with real-time SSE streaming and config control plane
- Multi-platform — native adapters for OpenCode, Claude Code, and Cursor
- MIT license — enterprise-friendly, no AGPL restrictions
- Well-tested — 71 test files covering core logic, adapters, and integration scenarios
## Features

- 🧠 Automatic observation capture from tool executions and user prompts
- 🤖 AI-powered compression via Vercel AI SDK — supports Anthropic, AWS Bedrock, OpenAI, Google (optional — works without API key)
- 🔍 Hybrid search — FTS5 full-text search + vector embeddings with Reciprocal Rank Fusion
- 💡 Progressive disclosure with token-cost-aware context injection and ROI tracking
- 🔒 Privacy controls with private-tag support
- 🛠️ Nine custom tools: memory.find, memory.create, memory.history, memory.get, memory.transfer.export, memory.transfer.import, memory.revise, memory.remove, memory.help
- 🌐 MCP server mode — expose memory tools to any MCP-compatible AI client
- 🔗 Knowledge graph — entity extraction with relationships, graph-augmented search
- 🔄 Multi-platform — native adapters for OpenCode, Claude Code, and Cursor
- 🌳 Git worktree support — shared memory across all worktrees
- 📂 AGENTS.md generation — auto-generated folder-level context on session end
- 📊 Web dashboard — 6-page management UI with real-time streaming
- 📦 Import/export — portable JSON for backup and transfer between machines
- ⚡ Zero-config setup — works out of the box
- 📁 All data stored locally in your project directory
## How It Works

open-mem runs in the background as an OpenCode plugin. When you use tools (reading files, running commands, editing code), it captures what happened. During idle time, it compresses those captures into structured observations using AI. At the start of your next session, it injects a compact memory index into the system prompt — so your agent knows what you've been working on.
```
┌──────────────────────────────────────────────────────────────┐
│                           OpenCode                           │
│                                                              │
│  tool.execute.after ─────> [Tool Capture Hook]               │
│  chat.message ───────────> [Chat Capture Hook]               │
│                                     │                        │
│                                     v                        │
│                              [Pending Queue]                 │
│                                     │                        │
│  session.idle ───────────> [Queue Processor]                 │
│                                     │                        │
│                             ┌───────┴───────┐                │
│                             v               v                │
│                      [AI Compressor] [Embedding Gen]         │
│                             │               │                │
│                             v               v                │
│                         [SQLite + FTS5 + Vectors]            │
│                                     │                        │
│  system.transform <──── [Context Injector + ROI Footer]      │
│                                                              │
│  session.end ────────────> [AGENTS.md Generation]            │
│                                                              │
│  memory.find ───────────> [Hybrid Search (FTS5 + Vector/RRF)]│
│  memory.create ─────────> [Direct Save]                      │
│  memory.history ────────> [Session Query]                    │
│  memory.get ────────────> [Full Observation Fetch]           │
│  memory.transfer.export ──> [JSON Export]                    │
│  memory.transfer.import ──> [JSON Import]                    │
│  memory.revise ─────────> [Create Revision]                  │
│  memory.remove ─────────> [Tombstone Observation]            │
│  memory.help ───────────> [Workflow Guidance]                │
│                                                              │
│  ┌──────────────────────────────────────────┐                │
│  │ MCP Server (stdin/stdout, JSON-RPC 2.0)  │                │
│  │ Exposes tools to any MCP-compatible AI   │                │
│  └──────────────────────────────────────────┘                │
└──────────────────────────────────────────────────────────────┘
```
### Capture and Compression

When you use tools in OpenCode (reading files, running commands, editing code), open-mem's tool.execute.after hook captures each execution as a pending observation. Sensitive content (API keys, tokens, passwords) is automatically redacted, and private-tagged blocks are stripped.
On session.idle, the queue processor batches pending observations and sends them to the configured AI provider for semantic compression. Each raw tool output is distilled into a structured observation with:
- Type classification (decision, bugfix, feature, refactor, discovery, change)
- Title and narrative summary
- Key facts extracted
- Concepts/tags for search
- Files involved
If no API key is set, a fallback compressor extracts basic metadata without AI.
### Progressive Context Injection

open-mem injects a compact index into the system prompt at session start. Each entry shows a type icon, title, token cost, and related files — giving the agent a map of what's in memory without consuming the full context window.
The agent sees what exists and decides what to fetch using memory.find and memory.get. This minimizes context window usage while providing full access to all stored observations.
Example of an injected index entry:
```
🔧 [refactor] Extract pricing logic (~120 tokens) — src/pricing.ts
💡 [discovery] FTS5 requires specific tokenizer config (~85 tokens)
```
During session compaction (experimental.session.compacting), open-mem injects memory context to preserve important information across compaction boundaries.
### Hybrid Search

When an AI provider with embedding support is configured (Google, OpenAI, or AWS Bedrock), open-mem generates vector embeddings for observations and uses Reciprocal Rank Fusion (RRF) to merge FTS5 text search with vector similarity search. This significantly improves search relevance.
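For intuition, RRF scores each observation by summing reciprocal ranks across the two result lists; k is a smoothing constant (60 is the conventional default, though the exact value open-mem uses isn't stated here):

```
score(d) = Σ_r 1 / (k + rank_r(d))    where r ranges over the rankers (FTS5, vector)
```

Items ranked well by either ranker surface near the top, without needing to normalize scores between the two systems.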
Embeddings are generated automatically during observation processing. If no embedding model is available (e.g., Anthropic, which doesn't offer embeddings), search falls back to FTS5-only — no degradation.
### Chat Capture

open-mem captures user messages via the chat.message hook, storing them as searchable observations. This preserves the intent behind tool executions — so future sessions can understand not just what happened, but why.
### Git Worktree Support

open-mem automatically detects git worktrees and resolves to the main repository root. All worktrees share the same memory database, so observations from one worktree are available in all others.
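For example, captures from a secondary worktree land in the same store (the path and branch name below are illustrative):

```bash
# Both checkouts share the main repository's .open-mem/ database
git worktree add ../my-app-hotfix hotfix-branch
```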
### AGENTS.md Generation

On session end, open-mem auto-generates AGENTS.md files in project folders that were touched during the session. These files contain a managed section (between marker tags) with recent activity, key concepts, and decisions for that folder.

User content outside the managed tags is preserved. Disable with OPEN_MEM_FOLDER_CONTEXT=false.
Modes:

- Dispersed (default): creates AGENTS.md in each touched folder with activity for that folder
- Single: creates one root file with all folder activity grouped by section headers (`### src/tools/`, `### src/hooks/`, etc.)

Configure via `OPEN_MEM_FOLDER_CONTEXT_MODE=single` or `OPEN_MEM_FOLDER_CONTEXT_FILENAME=CLAUDE.md`, as in the example below.
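For instance, to generate a single root context file named for Claude Code conventions:

```bash
export OPEN_MEM_FOLDER_CONTEXT_MODE=single
export OPEN_MEM_FOLDER_CONTEXT_FILENAME=CLAUDE.md
```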
### Memory Economics

The context injector includes a "Memory Economics" footer showing how much context compression saves: read cost vs. original discovery cost, with a savings percentage. This helps you understand the value of AI compression at a glance.
## Web Dashboard

open-mem includes a built-in web dashboard for memory management and observability. It provides six pages:
| Page | Description |
|------|-------------|
| Timeline | Chronological view of all observations with type filtering |
| Sessions | Browse past coding sessions and their observations |
| Search | Full-text and semantic search across all memories |
| Stats | Database statistics, observation counts, and memory economics |
| Operations | Queue status, maintenance actions, folder context management |
| Settings | Config control plane with live preview, mode presets, and audit log |
Enable it with:

```bash
export OPEN_MEM_DASHBOARD=true
```
Access at http://localhost:3737 (configurable via OPEN_MEM_DASHBOARD_PORT). The dashboard streams real-time updates via Server-Sent Events — new observations appear as they are captured.
The Config Control Plane is accessible through the Settings page, allowing you to preview, apply, and roll back configuration changes without restarting.
## Tools

open-mem registers nine custom tools with the agent.

### memory.find

Search through past observations and session summaries. Uses hybrid search (FTS5 + vector embeddings) when an embedding-capable provider is configured, or FTS5-only otherwise.

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| query | string | yes | Search query (keywords, phrases, file paths) |
| type | enum | no | Filter by type: decision, bugfix, feature, refactor, discovery, change |
| limit | number | no | Max results (1–50, default: 10) |
### memory.create

Manually save an important observation to memory.

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| title | string | yes | Brief title (max 80 chars) |
| type | enum | yes | Observation type: decision, bugfix, feature, refactor, discovery, change |
| narrative | string | yes | Detailed description of what to remember |
| concepts | string[] | no | Related concepts/tags |
| files | string[] | no | Related file paths |
### memory.history

View a timeline of past coding sessions, or center the view around a specific observation for cross-session navigation.

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| limit | number | no | Number of recent sessions (1–20, default: 5) |
| sessionId | string | no | Show details for a specific session |
| anchor | string | no | Observation ID to center the timeline around (cross-session view) |
| depthBefore | number | no | Observations to show before anchor (0–20, default: 5) |
| depthAfter | number | no | Observations to show after anchor (0–20, default: 5) |
### memory.get

Fetch full observation details by ID. Use after memory.find to get complete narratives, facts, concepts, and file lists for specific observations.

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| ids | string[] | yes | Observation IDs to fetch |
| limit | number | no | Maximum number of results (1–50, default: 10) |
### memory.transfer.export

Export project memories (observations and session summaries) as portable JSON for backup or transfer between machines.

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| format | enum | no | Export format (currently json only) |
| type | enum | no | Filter by observation type |
| limit | number | no | Maximum observations to export |
### memory.transfer.import

Import observations and summaries from a JSON export. Skips duplicates by ID.

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| data | string | yes | JSON string from a memory.transfer.export output |
### memory.revise

Update an existing project observation by ID.

This is immutable: the update creates a new revision and supersedes the previous active revision.

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| id | string | yes | Observation ID to update |
| title | string | no | Updated title |
| narrative | string | no | Updated narrative |
| type | enum | no | Updated observation type |
| concepts | string[] | no | Updated concepts/tags |
| importance | number | no | Updated importance (1–5) |
### memory.remove

Tombstone an existing project observation by ID.

This is a soft delete: the observation is hidden from default recall/search but retained for lineage.

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| id | string | yes | Observation ID to delete |
### memory.help

Returns a short workflow guide for using memory tools effectively: memory.find -> memory.history -> memory.get, plus write/edit/import/export patterns.
## MCP Server

open-mem includes a standalone MCP (Model Context Protocol) server that exposes memory tools to any MCP-compatible AI client — not just OpenCode.
Run the MCP server:
```bash
bunx open-mem-mcp --project /path/to/your/project
```
Or add it to your MCP client config:
```json
{
  "mcpServers": {
    "open-mem": {
      "command": "bunx",
      "args": ["open-mem-mcp", "--project", "/path/to/your/project"]
    }
  }
}
```
The server communicates over stdin/stdout using JSON-RPC 2.0 and exposes: memory.find, memory.create, memory.history, memory.get, memory.transfer.export, memory.transfer.import, memory.revise, memory.remove, memory.help.
Lifecycle behavior:

- `initialize` negotiates the protocol version (default 2024-11-05)
- `notifications/initialized` is supported
- strict mode requires `initialize` before `tools/list`/`tools/call`
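As a smoke test you can drive the stdio transport by hand. This is a minimal sketch; a real client would follow with notifications/initialized before listing or calling tools:

```bash
# Send a minimal JSON-RPC 2.0 initialize request over stdin and print the response
echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}' \
  | bunx open-mem-mcp --project /path/to/your/project
```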
## Multi-Platform Adapters

open-mem works beyond OpenCode. Dedicated adapter workers bring the same memory capabilities to Claude Code and Cursor, ingesting JSON events over stdin:

```bash
# Claude Code adapter worker
bunx open-mem-claude-code --project /path/to/project
```

Each line on stdin must be one JSON event. The workers normalize events into open-mem's shared platform schema and reuse the same capture/lifecycle pipeline used by the OpenCode hooks.
Each line receives a JSON response on stdout:

- success: `{"ok":true,"code":"OK","ingested":true}`
- parse error: `{"ok":false,"code":"INVALID_JSON",...}`
- schema mismatch: `{"ok":false,"code":"UNSUPPORTED_EVENT",...}`

Optional worker commands (see the sketch after this list):

- `{"command":"flush"}` to force queue processing
- `{"command":"health"}` to get worker queue status
- `{"command":"shutdown"}` to request graceful shutdown
Optional HTTP bridge mode:

```bash
bunx open-mem-claude-code --project /path/to/project --http-port 37877
```

Endpoints:

- `POST /v1/events` (same envelope/response semantics as stdio)
- `GET /v1/health`
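With the bridge running on the port from the example above, the health endpoint makes an easy liveness probe:

```bash
curl http://localhost:37877/v1/health
```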
Enable these adapters via env vars:

- `OPEN_MEM_PLATFORM_CLAUDE_CODE=true`
- `OPEN_MEM_PLATFORM_CURSOR=true`
## Data Model Notes

- Local-first storage remains project-local in `.open-mem/` (plus an optional user-level DB).
- memory.revise uses revision lineage, not in-place mutation.
- memory.remove uses tombstones, not hard deletes, for safer auditability and conflict handling.
- Pre-0.7.0 databases are not auto-migrated to lineage semantics. Use:

```bash
bunx open-mem-maintenance reset-db --project /path/to/your/project
```
## Config Control Plane

open-mem now supports a canonical project config file at `.open-mem/config.json`, in addition to environment variables.

Precedence (lowest to highest):

1. defaults
2. `.open-mem/config.json`
3. environment variables
4. programmatic overrides
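A sketch of what that file could look like. The key names here are an assumption (they mirror the programmatic options documented below); verify against GET /api/config/schema before relying on them:

```bash
# Hypothetical config.json — key names assumed, not confirmed by this README
mkdir -p .open-mem
cat > .open-mem/config.json <<'EOF'
{
  "provider": "google",
  "model": "gemini-2.5-flash-lite",
  "maxContextTokens": 4000,
  "retentionDays": 90
}
EOF
```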
Dashboard config APIs:

- `GET /api/config/schema`
- `GET /api/config/effective`
- `POST /api/config/preview`
- `PATCH /api/config`
- `GET /api/modes`
- `POST /api/modes/:id/apply`
- `GET /api/health`
- `GET /api/metrics`
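For example, with the dashboard running on its default port:

```bash
# Inspect the effective (merged) configuration
curl http://localhost:3737/api/config/effective

# List available mode presets
curl http://localhost:3737/api/modes
```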
## Configuration

open-mem works out of the box with zero configuration. All settings can be customized via environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| OPEN_MEM_PROVIDER | google | AI provider: google, anthropic, bedrock, openai, openrouter |
| GOOGLE_GENERATIVE_AI_API_KEY | — | API key for Google Gemini provider (free) |
| ANTHROPIC_API_KEY | — | API key for Anthropic provider |
| OPENAI_API_KEY | — | API key for OpenAI provider |
| OPENROUTER_API_KEY | — | API key for OpenRouter provider |
| OPEN_MEM_FALLBACK_PROVIDERS | — | Comma-separated fallback providers (e.g., google,anthropic,openai) |
| OPEN_MEM_DB_PATH | .open-mem/memory.db | Path to SQLite database |
| OPEN_MEM_MODEL | gemini-2.5-flash-lite | Model for AI compression |
| OPEN_MEM_MAX_CONTEXT_TOKENS | 4000 | Token budget for injected context |
| OPEN_MEM_COMPRESSION | true | Set to false to disable AI compression |
| OPEN_MEM_CONTEXT_INJECTION | true | Set to false to disable context injection |
| OPEN_MEM_IGNORED_TOOLS | — | Comma-separated tool names to ignore (e.g. Bash,Glob) |
| OPEN_MEM_BATCH_SIZE | 5 | Observations per processing batch |
| OPEN_MEM_RETENTION_DAYS | 90 | Delete observations older than N days (0 = forever) |
| OPEN_MEM_LOG_LEVEL | warn | Log verbosity: debug, info, warn, error |
| OPEN_MEM_CONTEXT_SHOW_TOKEN_COSTS | true | Show token costs in context index entries |
| OPEN_MEM_CONTEXT_TYPES | all | Observation types to include in context injection |
| OPEN_MEM_CONTEXT_FULL_COUNT | 3 | Number of recent observations shown in full |
| OPEN_MEM_MAX_OBSERVATIONS | 50 | Maximum observations to consider for context |
| OPEN_MEM_FOLDER_CONTEXT | true | Set to false to disable AGENTS.md generation |
| OPEN_MEM_FOLDER_CONTEXT_MAX_DEPTH | 5 | Max folder depth for AGENTS.md generation |
| OPEN_MEM_FOLDER_CONTEXT_MODE | dispersed | Context file mode: dispersed (per-folder) or single (one root file) |
| OPEN_MEM_FOLDER_CONTEXT_FILENAME | AGENTS.md | Filename for context files (e.g. CLAUDE.md for Claude Code) |
| OPEN_MEM_PLATFORM_OPENCODE | true | Set to false to disable OpenCode adapter |
| OPEN_MEM_PLATFORM_CLAUDE_CODE | false | Set to true to enable Claude Code adapter surface |
| OPEN_MEM_PLATFORM_CURSOR | false | Set to true to enable Cursor adapter surface |
| OPEN_MEM_MCP_COMPAT_MODE | strict | MCP mode: strict or legacy |
| OPEN_MEM_MCP_PROTOCOL_VERSION | 2024-11-05 | Preferred MCP protocol version |
| OPEN_MEM_MCP_SUPPORTED_PROTOCOLS | 2024-11-05 | Comma-separated supported protocol versions |
| OPEN_MEM_DASHBOARD | false | Set to true to enable the web dashboard |
| OPEN_MEM_DASHBOARD_PORT | 3737 | Dashboard HTTP port |
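For example, a minimal setup that uses the Google free tier, trims retention, and turns on the dashboard (values are illustrative):

```bash
export GOOGLE_GENERATIVE_AI_API_KEY=...   # provider auto-detected as google
export OPEN_MEM_RETENTION_DAYS=30         # keep a month of observations
export OPEN_MEM_DASHBOARD=true            # serve the web UI on port 3737
```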
## Programmatic Configuration Reference

If you need to configure open-mem programmatically (e.g. for testing or custom integrations), these are the full config options:
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| dbPath | string | .open-mem/memory.db | SQLite database file path |
| provider | string | google | AI provider: google, anthropic, bedrock, openai, openrouter |
| apiKey | string | undefined | Provider API key |
| model | string | gemini-2.5-flash-lite | Model for compression |
| maxTokensPerCompression | number | 1024 | Max tokens per compression response |
| compressionEnabled | boolean | true | Enable AI compression |
| contextInjectionEnabled | boolean | true | Enable context injection |
| maxContextTokens | number | 4000 | Token budget for system prompt injection |
| batchSize | number | 5 | Observations per batch |
| batchIntervalMs | number | 30000 | Batch processing interval (ms) |
| ignoredTools | string[] | [] | Tool names to skip |
| minOutputLength | number | 50 | Minimum output length to capture |
| maxIndexEntries | number | 20 | Max observation index entries in context |
| sensitivePatterns | string[] | [] | Additional regex patterns to redact |
| retentionDays | number | 90 | Data retention period (0 = forever) |
| maxDatabaseSizeMb | number | 500 | Maximum database size |
| logLevel | string | warn | Log level: debug, info, warn, error |
| folderContextEnabled | boolean | true | Auto-generate AGENTS.md in active folders |
| folderContextMaxDepth | number | 5 | Max folder depth from project root |
| folderContextMode | string | dispersed | Context file mode: dispersed (per-folder) or single (one root file) |
| folderContextFilename | string | AGENTS.md | Filename for context files (e.g. CLAUDE.md for Claude Code) |
| fallbackProviders | string[] | undefined | Provider names for automatic failover (e.g. ["google","anthropic"]) |

## Privacy & Security

### Local-first storage
All data is stored locally in your project's `.open-mem/` directory. No data leaves your machine except when AI compression is enabled.

### AI compression
When AI compression is enabled, tool outputs are sent to the configured AI provider for compression. Disable with `OPEN_MEM_COMPRESSION=false` to keep everything fully local.

### Automatic redaction
open-mem automatically redacts common sensitive patterns before storage:

- API keys and tokens (e.g. `sk-ant-...`, `ghp_...`, `Bearer ...`)
- Passwords and secrets
- Environment variable values matching sensitive patterns
- Custom patterns via the `sensitivePatterns` config option

### Private content
Wrap any content in open-mem's private tags to exclude it from memory entirely. Private blocks are stripped before observation capture — they never reach the database or the AI provider.

### Gitignore
Add `.open-mem/` to your `.gitignore` to prevent committing memory data:

```bash
echo '.open-mem/' >> .gitignore
```

## Troubleshooting

### No API key warning
This is a warning, not an error. open-mem works without an API key — it falls back to a basic metadata extractor. To enable AI compression, configure a provider:
```bash
# Google Gemini (default — free tier)
# Get a free key at https://aistudio.google.com/apikey
export GOOGLE_GENERATIVE_AI_API_KEY=...

# Or use Anthropic
export OPEN_MEM_PROVIDER=anthropic
export ANTHROPIC_API_KEY=sk-ant-...

# Or use AWS Bedrock (no API key needed, uses AWS credentials)
export OPEN_MEM_PROVIDER=bedrock
export OPEN_MEM_MODEL=us.anthropic.claude-sonnet-4-20250514-v1:0
```

### SQLite errors
If you encounter SQLite errors, try removing the database and letting it recreate:
```bash
rm -rf .open-mem/
```

### Context not appearing in sessions
1. Verify the plugin is loaded: check OpenCode logs for `[open-mem]` messages
2. Ensure `OPEN_MEM_CONTEXT_INJECTION` is not set to false
3. Check that observations exist: use the memory.history tool
4. The first session won't have context — observations must be captured first

### Database size
If the database grows too large, shorten the retention window; you can also reduce the injected-context budget:

```bash
export OPEN_MEM_RETENTION_DAYS=30
export OPEN_MEM_MAX_CONTEXT_TOKENS=2000
```

## Uninstalling
1. Remove `"open-mem"` from the plugin array in your OpenCode config (`~/.config/opencode/opencode.json`).
2. Remove the package:

```bash
bun remove open-mem
```

3. Optionally, delete stored memory data:

```bash
rm -rf .open-mem/
```

## Documentation
- Getting Started — installation, configuration, and first steps
- Architecture — internal design, data flow, and source layout
## Feature Highlights
| Feature | open-mem | Typical alternatives |
|---------|----------|---------------------|
| Vector search | Native (sqlite-vec) | External service (Chroma) |
| AI providers | 5 with fallback chain | 1–3 |
| Search | FTS5 + Vector + RRF + Graph | FTS5 only |
| Knowledge graph | Entities + relationships | No |
| Revision history | Immutable lineage | No |
| Dashboard | 6-page web UI with SSE | No |
| License | MIT | AGPL / proprietary |
| Data locality | Project-local `.open-mem/` | Global |

## Contributing

See CONTRIBUTING.md for development setup, code style, and submission guidelines.

## Changelog

See CHANGELOG.md for a detailed history of changes.