MCP client configuration for REMBR - semantic memory for AI agents, RLMs, and multi-agent systems
```bash
npm install @rembr/client
```

Client configuration package for REMBR - semantic memory for AI agents and assistants.
REMBR is a hosted Model Context Protocol (MCP) server that provides persistent, searchable semantic memory for AI agents. It's designed for:
- GitHub Copilot Agents - Give your agents long-term memory
- Cursor - Persistent context for Cursor AI
- Windsurf - Cascade flow coordination with shared memory
- Claude Desktop - Enhanced context across conversations
- Recursive Language Models (RLMs) - Context management for recursive decomposition
- Multi-Agent Systems - Shared knowledge base across agent teams
19 MCP Tools for comprehensive memory management:
- Core Memory: store, search, update, delete, list
- Advanced Search: phrase search, semantic search, metadata filtering
- Discovery: find similar memories, get embedding stats
- RLM Support: contexts, snapshots, memory graphs
- Analytics: usage stats, contradiction detection, insights
4 Search Modes:
- Hybrid (default) - 0.7 semantic + 0.3 text matching
- Semantic - Conceptual similarity (finds "OAuth" when you search "authentication")
- Text - Fast fuzzy keyword matching
- Phrase - Multi-word exact matching ("rate limiting" not "limit the rate")
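For illustration, the mode can be chosen per query. A minimal sketch in the style of the tool-call examples later in this README; note that the `mode` parameter name is an assumption, only the four modes themselves are documented above:

```javascript
// Sketch: the same kind of search with two different modes.
// The "mode" parameter name is an assumption, not documented above.
search_memory({
  query: "authentication",
  mode: "semantic", // conceptual match, would also surface OAuth-related memories
  limit: 5
})

search_memory({
  query: "rate limiting",
  mode: "phrase", // exact multi-word match, not "limit the rate"
  limit: 5
})
```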
Built for RLMs: Task isolation, metadata filtering, progressive refinement
Affordable: from a free dev tier to £100/mo (1M memories)
```bash
npm install -D @rembr/client
```

Installation runs the setup automatically and configures:
1. MCP server connections for VS Code, Cursor, Windsurf, and Claude Desktop
2. Agent instructions for each tool
3. Project-specific configuration files
Or run setup manually:
```bash
npx rembr setup
```
Adds REMBR MCP server to:
- .vscode/mcp.json - VS Code + GitHub Copilot
- .cursor/mcp.json - Cursor
- .windsurf/mcp.json - Windsurf
- ~/Library/Application Support/Claude/claude_desktop_config.json - Claude Desktop (global)
```json
{
  "servers": {
    "rembr": {
      "url": "https://rembr.ai/mcp",
      "type": "http"
    }
  }
}
```
Creates tool-specific instruction files:
1. GitHub Copilot (.github/agents/recursive-analyst.agent.md)
- Formal agent definition with tool access
- RLM pattern implementation
- Structured subagent protocol
2. Cursor (.cursorrules)
- Cursor-specific REMBR integration patterns
- Memory management examples
- Subtask coordination

Configure your tool:
VS Code / GitHub Copilot:
- Open Settings (Cmd+, or Ctrl+,) → Search "MCP"
- Add to .vscode/settings.json:

```json
{
  "mcp.servers.rembr.env": {
    "REMBR_API_KEY": "rembr_live_xxxxxxxxxxxx"
  }
}
```

- Reload window: Cmd+Shift+P → "Developer: Reload Window"
- Test: @Recursive-Analyst what tasks have I worked on?
Cursor:
- Settings → MCP → rembr → Environment Variables
- Add REMBR_API_KEY=rembr_live_xxxxxxxxxxxx
- Restart Cursor
- Test: Ask Cursor to "search REMBR for authentication patterns"
Windsurf:
- Settings → MCP → rembr → Environment Variables
- Add REMBR_API_KEY=rembr_live_xxxxxxxxxxxx
- Restart Windsurf
- Test: Use REMBR tools in a Cascade flow
Claude Desktop:
- Edit ~/Library/Application Support/Claude/claude_desktop_config.json
- Add under mcpServers.rembr:

```json
{
  "mcpServers": {
    "rembr": {
      "url": "https://rembr.ai/mcp",
      "type": "http",
      "env": {
        "REMBR_API_KEY": "rembr_live_xxxxxxxxxxxx"
      }
    }
  }
}
```
- Restart Claude Desktop
- Test: "Use REMBR to remember that I prefer TypeScript"
Aider:
- Export in your shell: export REMBR_API_KEY=rembr_live_xxxxxxxxxxxx
- Use the bash aliases from .aider.conf.yml
- Test: rembr-query "recent changes"
2. Get an API key
- Go to Dashboard → Settings → API Keys
- Create a new key
3. Configure VS Code
- Open VS Code Settings (Cmd+, or Ctrl+,)
- Search for "MCP"
- Add environment variable: REMBR_API_KEY=your_key_here
Or add to .vscode/settings.json:
```json
{
  "mcp.servers.rembr.env": {
    "REMBR_API_KEY": "rembr_live_xxxxxxxxxxxx"
  }
}
```
4. Reload VS Code
- Cmd+Shift+P → "Developer: Reload Window"
5. Test the connection
- Open GitHub Copilot Chat
- Try: @Recursive-Analyst what tasks have I worked on recently?
Once configured, these tools are available to all agents:
- store_memory - Store new memories with categories and metadata
- search_memory - Hybrid semantic + text search
- list_memories - List recent memories by category
- get_memory - Retrieve specific memory by ID
- delete_memory - Remove a memory
- create_context - Create workspace for related memories
- add_memory_to_context - Link memories to contexts
- search_context - Scoped search within a context
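The context tools can be chained for RLM-style task isolation. A minimal sketch under stated assumptions: the tool names come from the list above, but the parameter names and return shapes are assumptions, not documented here:

```javascript
// Sketch: create a workspace, attach a memory, then run a scoped search.
// Tool names are documented above; field names and return values are assumptions.
const ctx = create_context({
  name: "payments-refactor",
  description: "Memories for the Stripe webhook refactor"
})

const memory = store_memory({
  category: "facts",
  content: "Webhook retries are capped at 3 attempts",
  metadata: { area: "payments" }
})

add_memory_to_context({
  contextId: ctx.id,
  memoryId: memory.id
})

// Scoped search only considers memories linked to this context
search_context({
  contextId: ctx.id,
  query: "webhook retry policy"
})
```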
Storing memories

In GitHub Copilot / Cursor / Windsurf / Claude Desktop:

```javascript
store_memory({
category: "facts",
content: "The payment API uses Stripe webhooks for event processing",
metadata: {
area: "payments",
file: "src/webhooks/stripe.ts"
}
})
```

In Aider, use the matching bash alias.

Searching memories

In MCP-enabled tools:

```javascript
search_memory({
query: "how do we handle Stripe webhooks",
category: "facts",
limit: 5
})
```

In Aider:

```bash
rembr-query "how do we handle Stripe webhooks" Memory CategoriesOrganize memories using semantic categories:
- facts - Concrete information and data points
- preferences - User preferences and settings
- conversations - Conversation history and context
- projects - Project-specific information
- learning - Knowledge and insights learned
- goals - Objectives and targets
- context - Situational context
- reminders - Future actions and reminders
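For illustration, a sketch of storing into two of these categories; only store_memory, category, content, and metadata are taken from this README, the metadata keys and values are illustrative:

```javascript
// Sketch: the category strings come from the list above.
store_memory({
  category: "learning",
  content: "Stripe webhook signatures must be verified against the raw request body",
  metadata: { area: "payments" }
})

store_memory({
  category: "reminders",
  content: "Review the webhook retry configuration before the next release",
  metadata: { area: "payments" }
})
```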
Example Usage
Subagent/Flow Context

GitHub Copilot (Subagents):

```javascript
// Parent agent retrieves context for subagent
const context = search_memory({
query: "authentication middleware patterns",
category: "facts",
limit: 10
})

// Spawn subagent with this context
// Subagent stores findings with taskId metadata
// Parent retrieves subagent findings later
```

Windsurf (Cascade Flows):

```javascript
// Before creating a flow, retrieve relevant context
const context = search_memory({
query: "API rate limiting patterns",
category: "facts"
})

// Create flow with this context
// Flow stores findings for other flows to use
```

Cursor / Claude Desktop:

```javascript
// Before breaking down a task, query REMBR
const priorWork = search_memory({
query: "database migration strategies",
limit: 5
})

// Use context to inform implementation
// Store new insights back to REMBR
```
Pricing

- Free Dev Tier: 100 memories, 1,000 searches/day
- Pro: £10/month - 10,000 memories, 100,000 searches
- Team: £30/month - 100,000 memories, 1M searches
- Enterprise: £100/month - 1M memories, unlimited searches
All tiers include:
- Hybrid semantic + text search
- Multi-tenant isolation
- OAuth support for Claude Desktop
- Full API access
- Website: https://rembr.ai
- API Docs: https://rembr.ai/docs
- MCP Spec: https://modelcontextprotocol.io
- RLM Paper: https://github.com/alexzhang13/rlm
- Email: support@rembr.ai
- Issues: https://github.com/radicalgeek/rembr-client/issues
- Slack: https://rembr.ai/slack
MIT