# superlocalmemory

Local AI memory - Orama vector DB + Transformers.js embeddings. No cloud, no API keys. Your AI remembers across sessions, 100% locally.


## Installation

### OpenCode Plugin

```bash
npm install opencode-superlocalmemory
```

Add to `~/.config/opencode/opencode.json`:

```json
{
  "plugin": ["opencode-superlocalmemory"]
}
```

Restart OpenCode. Done.
### Core Library

```bash
# npm
npm install @superlocalmemory/core

# pnpm
pnpm add @superlocalmemory/core
```

```typescript
import { createMemoryStore } from "@superlocalmemory/core";

const store = await createMemoryStore();
await store.add("User prefers dark mode", "user_tag", { type: "preference" });
const results = await store.search("preferences", "user_tag");
```

### MCP Server
```bash
cd supermemory-local
pnpm build
node packages/mcp/dist/index.js
```

Add to your Claude Desktop config:
```json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/supermemory-local/packages/mcp/dist/index.js"]
    }
  }
}
```

### Docker
```bash
cd docker
docker compose up -d
```

The API is then available at http://localhost:3333.

## Features
### Automatic Context Injection
On first message, your AI receives:

```
[SUPERLOCALMEMORY CONTEXT]
User Profile
- Prefers concise responses
- Expert in TypeScript

Project Context
- Uses pnpm, not npm
- Build command: pnpm build
[/SUPERLOCALMEMORY CONTEXT]
```

This happens automatically - no prompting needed.
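The assembly of such a block can be sketched as a small formatting helper. This is illustrative only: the `Memory` shape and `formatContextBlock` name are assumptions, not the plugin's actual internals.

```typescript
// Illustrative sketch: building the injected context block from stored
// memories, grouped by scope. Memory and formatContextBlock are hypothetical
// names, not the plugin's real internals.
interface Memory {
  content: string;
  scope: "user" | "project";
}

function formatContextBlock(memories: Memory[]): string {
  const user = memories.filter((m) => m.scope === "user");
  const project = memories.filter((m) => m.scope === "project");
  const lines = ["[SUPERLOCALMEMORY CONTEXT]"];
  if (user.length > 0) {
    lines.push("User Profile", ...user.map((m) => `- ${m.content}`));
  }
  if (project.length > 0) {
    lines.push("Project Context", ...project.map((m) => `- ${m.content}`));
  }
  lines.push("[/SUPERLOCALMEMORY CONTEXT]");
  return lines.join("\n");
}
```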
### Keyword Detection
Say "remember", "save this", or "don't forget" and the AI auto-saves:

```
You: "Remember that this project uses bun"
AI: [saves to project memory]
```

### Privacy
Everything stays on your machine. No API calls, no cloud storage.

Wrap sensitive data in privacy tags to keep it out of storage:

```
My API key is sk-abc123
```

Content inside the tags is replaced with [REDACTED] before saving.
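The redaction step can be sketched as a single regex replace. Note that the `<private>` tag name below is a stand-in chosen for illustration, since the actual tag name is not shown above.

```typescript
// Hypothetical sketch of the redaction step. The <private> tag name is a
// stand-in for illustration -- substitute whatever tag the plugin recognizes.
function redact(content: string): string {
  // Non-greedy match so multiple tagged spans in one message are each
  // replaced independently.
  return content.replace(/<private>[\s\S]*?<\/private>/g, "[REDACTED]");
}
```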
### Preemptive Compaction

When context usage hits 80% of the model's limit:
1. Injects project memories into compaction prompt
2. Triggers OpenCode's summarization
3. Saves session summary as a memory
This preserves context across long sessions.
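The trigger condition described above reduces to a ratio check against the configured `compactionThreshold` (default 0.8). A minimal sketch, with a hypothetical `shouldCompact` helper:

```typescript
// Illustrative check for the compaction trigger: fires once context usage
// reaches the configured ratio (compactionThreshold, default 0.8).
// shouldCompact is a hypothetical name, not the plugin's actual API.
function shouldCompact(
  tokensUsed: number,
  contextLimit: number,
  threshold = 0.8,
): boolean {
  return tokensUsed / contextLimit >= threshold;
}
```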
## Tool Usage

The `memory` tool is available to your AI:

| Mode | Args | Description |
|------|------|-------------|
| `add` | `content`, `type?`, `scope?` | Store memory |
| `search` | `query`, `scope?`, `limit?` | Semantic search |
| `list` | `scope?`, `limit?` | List memories |
| `delete` | `memoryId` | Remove memory |
| `profile` | - | View user facts |
| `help` | - | Show commands |

### Scopes
- `user` - Cross-project (preferences, patterns)
- `project` - Project-specific (default)

### Memory Types
- `preference` - User preferences
- `project-config` - Project settings
- `architecture` - Design decisions
- `error-solution` - Bug fixes
- `learned-pattern` - Code patterns

### Examples
```
# Save a preference
memory mode:add content:"User prefers dark mode" scope:user type:preference

# Search memories
memory mode:search query:"build commands"

# List project memories
memory mode:list scope:project limit:10

# Delete a memory
memory mode:delete memoryId:mem_123abc
```

## How It Works
| Component | Tech | Purpose |
|-----------|------|---------|
| Vector DB | Orama | Fast embedded search |
| Embeddings | Transformers.js | Local ML, no API |
| Storage | JSON file | `~/.superlocalmemory/memories.json` |

The first query downloads the embedding model (~30MB). Subsequent queries are instant.
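Semantic search ranks stored memories by comparing embedding vectors against the query vector, filtered by `similarityThreshold`. A minimal cosine-similarity sketch (the vectors themselves come from Transformers.js; this helper is illustrative, not the library's internals):

```typescript
// Illustrative cosine similarity between two embedding vectors, as used to
// rank search results against similarityThreshold. Not the actual internals
// of Orama or Transformers.js.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Results scoring below the threshold (0.6 by default) would simply be dropped.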
## Configuration

Create `~/.config/opencode/superlocalmemory.json`:

```json
{
  "dataPath": "~/.superlocalmemory",
  "similarityThreshold": 0.6,
  "maxMemories": 5,
  "maxProjectMemories": 10,
  "compactionThreshold": 0.8,
  "embeddingModel": "Xenova/all-MiniLM-L6-v2",
  "debug": false
}
```
| Option | Default | Description |
|--------|---------|-------------|
| `dataPath` | `~/.superlocalmemory` | Where memories are stored |
| `similarityThreshold` | 0.6 | Min similarity for search results |
| `maxMemories` | 5 | Max memories injected per request |
| `maxProjectMemories` | 10 | Max project memories listed |
| `compactionThreshold` | 0.8 | Context usage ratio that triggers compaction |
| `embeddingModel` | `Xenova/all-MiniLM-L6-v2` | Local embedding model (or `"none"`) |
| `debug` | false | Enable debug logging to `~/.superlocalmemory.log` |
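Every option is optional; a plausible sketch of how a partial user config could be merged over the documented defaults (the `Config` shape and `loadConfig` helper are assumptions mirroring the table above):

```typescript
// Illustrative config loading: unspecified options fall back to the
// documented defaults. Config and loadConfig are hypothetical names.
interface Config {
  dataPath: string;
  similarityThreshold: number;
  maxMemories: number;
  maxProjectMemories: number;
  compactionThreshold: number;
  embeddingModel: string;
  debug: boolean;
}

const DEFAULTS: Config = {
  dataPath: "~/.superlocalmemory",
  similarityThreshold: 0.6,
  maxMemories: 5,
  maxProjectMemories: 10,
  compactionThreshold: 0.8,
  embeddingModel: "Xenova/all-MiniLM-L6-v2",
  debug: false,
};

function loadConfig(userConfig: Partial<Config>): Config {
  // Later spread wins, so user-supplied keys override the defaults.
  return { ...DEFAULTS, ...userConfig };
}
```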
## Development

```bash
pnpm install
pnpm build
pnpm test
```

## vs Supermemory
| Feature | superlocalmemory | supermemory |
|---------|------------------|-------------|
| Privacy | 100% local | Cloud API |
| API Key | Not needed | Required |
| Cost | Free | Paid |
| Setup | Clone & run | Install + signup |
| Embeddings | Local (Transformers.js) | Cloud |
| Context injection | Yes | Yes |
| Keyword detection | Yes | Yes |
| Privacy tags | Yes | Yes |
| Preemptive compaction | Yes | Yes |
## Architecture

```
packages/
  core/            # Memory engine (Orama + embeddings)
  mcp/             # MCP server (stdio + HTTP)
  opencode-plugin/ # OpenCode integration
docker/            # Docker deployment
```

## License

MIT