# Memory Mesh

Portable, semantic memory system for AI agents with automatic Layer 0 sanitization, built on the YAMO Protocol for transparent agent collaboration with structured workflows and immutable provenance.

## Features
- Persistent Vector Storage: Powered by LanceDB for semantic search.
- Layer 0 Scrubber: Automatically sanitizes, deduplicates, and cleans content before embedding.
- Local Embeddings: Runs 100% locally using ONNX (no API keys required).
- Portable CLI: Simple JSON-based interface for any agent or language.
- YAMO Skills Integration: Includes yamo-super workflow system with automatic memory learning.
- Pattern Recognition: Workflows automatically store and retrieve execution patterns for optimization.
- LLM-Powered Reflections: Generate insights from memories using configurable LLM providers.
- YAMO Audit Trail: Automatic emission of structured blocks for all memory operations.
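To give a feel for what Layer 0 scrubbing involves, here is a standalone sketch of that kind of sanitization pass — stripping control characters, collapsing whitespace, and dropping exact duplicates before embedding. This is an illustration, not the library's actual implementation:

```javascript
// Illustrative Layer-0-style sanitization (not the real scrubber's code).
function scrub(entries) {
  const seen = new Set();
  const clean = [];
  for (const raw of entries) {
    const text = raw
      .replace(/[\u0000-\u0008\u000B-\u001F\u007F]/g, '') // strip control chars
      .replace(/\s+/g, ' ')                               // collapse whitespace
      .trim();
    if (text && !seen.has(text)) { // drop empties and exact duplicates
      seen.add(text);
      clean.push(text);
    }
  }
  return clean;
}

console.log(scrub(['  Hello\tworld ', 'Hello world', '', 'Second  note']));
// → [ 'Hello world', 'Second note' ]
```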
## Installation

```bash
npm install @yamo/memory-mesh
```

## Quick Start

### CLI

```bash
# Store a memory (automatically scrubbed & embedded)
memory-mesh store "My important memory" '{"tag":"test"}'
```

### Library
```javascript
import { MemoryMesh } from '@yamo/memory-mesh';

const mesh = new MemoryMesh();
await mesh.add('Content', { meta: 'data' });
const results = await mesh.search('query');
```

## LLM-Powered Reflections
MemoryMesh supports LLM-powered reflection generation that synthesizes insights from stored memories:
```javascript
import { MemoryMesh } from '@yamo/memory-mesh';

// Enable LLM integration (requires an API key or a local model)
const mesh = new MemoryMesh({
  enableLLM: true,
  llmProvider: 'openai', // or 'anthropic', 'ollama'
  llmApiKey: process.env.OPENAI_API_KEY,
  llmModel: 'gpt-4o-mini'
});
// Store some memories
await mesh.add('Bug: type mismatch in keyword search', { type: 'bug' });
await mesh.add('Bug: missing content field', { type: 'bug' });
// Generate reflection (automatically stores result to memory)
const reflection = await mesh.reflect({ topic: 'bugs', lookback: 10 });
console.log(reflection.reflection);
// Output: "Synthesized from 2 memories: Bug: type mismatch..., Bug: missing content..."
console.log(reflection.confidence); // 0.85
console.log(reflection.yamoBlock); // YAMO audit trail
```

CLI usage:

```bash
# With LLM (default)
memory-mesh reflect '{"topic": "bugs", "limit": 10}'

# Without LLM (prompt-only, for an external LLM)
memory-mesh reflect '{"topic": "bugs", "llm": false}'
```

## YAMO Audit Trail
MemoryMesh automatically emits YAMO blocks for all operations when enabled:
```javascript
const mesh = new MemoryMesh({ enableYamo: true });

// All operations now emit YAMO blocks
await mesh.add('Memory content', { type: 'event' }); // emits 'retain' block
await mesh.search('query'); // emits 'recall' block
await mesh.reflect({ topic: 'test' }); // emits 'reflect' block
// Query YAMO log
const yamoLog = await mesh.getYamoLog({ operationType: 'reflect', limit: 10 });
console.log(yamoLog);
// [{ id, agentId, operationType, yamoText, timestamp, ... }]
```

## Using in a Project

To use MemoryMesh with your Claude Code skills (like `yamo-super`) in a new project:

### Install the package
```bash
npm install @yamo/memory-mesh
```

### Run the setup script
This installs YAMO skills to `~/.claude/skills/memory-mesh/` and CLI tools to `./tools/`:

```bash
npx memory-mesh-setup
```

The setup script will:
- Copy YAMO skills (`yamo-super`, `scrubber`) to Claude Code
- Copy CLI tools to your project's `tools/` directory
- Prompt before overwriting existing files
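For orientation, an illustrative layout after setup — only the paths named in this README are taken from the docs; the exact tree may differ:

```text
~/.claude/skills/memory-mesh/
├── yamo-super/      # workflow system skill
└── scrubber/        # Layer 0 scrubber skill
./tools/
└── memory_mesh.js   # CLI tool found by YAMO agents
```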
Your skills are now available in Claude Code with automatic memory integration:
```bash
# Use the yamo-super workflow system.
# Automatically retrieves similar past workflows and stores execution patterns.
claude /yamo-super
```

Memory Integration Features:
- Workflow Orchestrator: Searches for similar past workflows before starting
- Design Phase: Stores validated designs with metadata
- Debug Phase: Retrieves similar bug patterns and stores resolutions
- Review Phase: Stores code review outcomes and quality metrics
- Complete Workflow: Stores full execution pattern for future optimization
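As a rough illustration of the orchestrator's first step, the reuse decision over search results might look like the sketch below. The `score` and `metadata.type` fields are assumptions about the result shape, not documented API:

```javascript
// Hypothetical helper: pick the best past workflow to reuse, if any result
// is similar enough. Field names (score, metadata.type) are illustrative.
function pickReusableWorkflow(results, threshold = 0.8) {
  const workflows = results.filter(r => r.metadata && r.metadata.type === 'workflow');
  workflows.sort((a, b) => b.score - a.score); // highest similarity first
  const best = workflows[0];
  return best && best.score >= threshold ? best : null;
}

const results = [
  { content: 'Refactor auth flow', score: 0.91, metadata: { type: 'workflow' } },
  { content: 'Bug: null token',    score: 0.95, metadata: { type: 'bug' } }
];
console.log(pickReusableWorkflow(results).content); // → 'Refactor auth flow'
```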
YAMO agents will automatically find tools in `tools/memory_mesh.js`.

## Docker

```bash
docker run -v $(pwd)/data:/app/runtime/data \
  yamo/memory-mesh store "Content"
```

## About YAMO Protocol
Memory Mesh is built on the YAMO (Yet Another Markup for Orchestration) Protocol - a structured language for transparent AI agent collaboration with immutable provenance tracking.
YAMO Protocol Features:
- Structured Agent Workflows: Semicolon-terminated constraints, explicit handoff chains
- Meta-Reasoning Traces: Hypothesis, rationale, confidence, and observation annotations
- Blockchain Integration: Immutable audit trails via Model Context Protocol (MCP)
- Multi-Agent Coordination: Designed for transparent collaboration across organizational boundaries
Learn More:
- YAMO Protocol Organization: github.com/yamo-protocol
- Protocol Specification: See the YAMO RFC documents for core syntax and semantics
- Ecosystem: Explore other YAMO-compliant tools and skills
Memory Mesh implements YAMO v2.1.0 compliance with:
- MemorySystemInitializer agent for graceful degradation
- Context passing between agents (`from_AgentName.output`)
- Structured logging with meta-reasoning
- Priority levels and constraint-based execution
- Automatic workflow pattern storage for continuous learning

Related YAMO Projects:
- `yamo-chain` - Blockchain integration for agent provenance
## Documentation

- Architecture Guide: `docs/ARCHITECTURE.md` - Comprehensive system architecture (1,118 lines)
- Development Guide: `CLAUDE.md` - Guide for Claude Code development
- Marketplace: `.claude-plugin/marketplace.json` - Plugin metadata
## Configuration

### LLM Settings
```bash
# Required for LLM-powered reflections
LLM_PROVIDER=openai      # Provider: 'openai', 'anthropic', 'ollama'
LLM_API_KEY=sk-...       # API key for OpenAI/Anthropic
LLM_MODEL=gpt-4o-mini    # Model name
LLM_BASE_URL=https://... # Optional: custom API base URL
```

Supported Providers:
- OpenAI: GPT-4, GPT-4o-mini, etc.
- Anthropic: Claude 3.5 Haiku, Sonnet, Opus
- Ollama: Local models (llama3.2, mistral, etc.)
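For a fully local stack, the same constructor options shown earlier can point reflections at Ollama. A minimal sketch — the model name is just an example, and it assumes a running Ollama server:

```javascript
import { MemoryMesh } from '@yamo/memory-mesh';

// Fully local setup: Ollama serves the LLM and embeddings run locally,
// so no API key is needed (assumes llama3.2 has been pulled into Ollama).
const mesh = new MemoryMesh({
  enableLLM: true,
  llmProvider: 'ollama',
  llmModel: 'llama3.2'
});
```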
### YAMO Settings
```bash
# Optional YAMO settings
ENABLE_YAMO=true # Enable YAMO block emission (default: true)
YAMO_DEBUG=true  # Enable verbose YAMO logging
```

### Vector Database
```bash
# Vector database settings
LANCEDB_URI=./runtime/data/lancedb
LANCEDB_MEMORY_TABLE=memory_entries
```

### Embedding Model
```bash
# Embedding model settings
EMBEDDING_MODEL_TYPE=local # 'local', 'openai', 'cohere', 'ollama'
EMBEDDING_MODEL_NAME=Xenova/all-MiniLM-L6-v2
EMBEDDING_DIMENSION=384
```

### Example `.env`
```bash
# LLM for reflections
LLM_PROVIDER=openai
LLM_API_KEY=sk-your-key-here
LLM_MODEL=gpt-4o-mini

# YAMO audit
ENABLE_YAMO=true
YAMO_DEBUG=false

# Vector DB
LANCEDB_URI=./data/lancedb

# Embeddings (local default)
EMBEDDING_MODEL_TYPE=local
EMBEDDING_MODEL_NAME=Xenova/all-MiniLM-L6-v2
```