Semantic Web Memory for Intelligent Agents
`npm install semem`

It has MCP facilities (stdio & HTTP) as well as a browser-based UI with chat.
Quick Start with Docker: A complete Docker Compose setup is available for easy deployment with all dependencies included. See Docker Setup Guide for one-command installation.
Interactive Chat Interface: The workbench includes a natural language chat interface with slash commands (/ask, /tell, /help) and automatic URL/file ingestion. See Chat Documentation for details.
Persistent Memory System: Every chat interaction is automatically stored in the SPARQL knowledge graph with semantic embeddings. The system retrieves relevant past conversations to inform future responses, creating a continuously learning memory that persists across sessions. Memory flows: Chat UI → ChatAPI → MemoryManager → SPARQL store with embeddings → retrieval for future context.
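A minimal sketch of that flow, using hypothetical helper names rather than Semem's actual classes (the real ChatAPI and MemoryManager live under src/):

```javascript
// Illustrative sketch of the chat memory loop (names are assumptions, not Semem's API).
async function handleChatTurn(message, { embed, sparqlStore, llm }) {
  // 1. Embed the incoming message
  const vector = await embed(message);

  // 2. Retrieve semantically similar past interactions from the SPARQL store
  const pastContext = await sparqlStore.similaritySearch(vector, { limit: 5 });

  // 3. Generate a response informed by retrieved memory
  const reply = await llm.chat([...pastContext, { role: 'user', content: message }]);

  // 4. Persist this interaction (text + embedding) so future sessions can recall it
  await sparqlStore.save({ prompt: message, response: reply, embedding: vector });

  return reply;
}
```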
The hints page might help.
Semem Documentation - sprawling
Semem is an experimental Node.js toolkit for AI memory management that integrates large language models (LLMs) with Semantic Web technologies (RDF/SPARQL). It offers knowledge graph retrieval and augmentation algorithms within a conceptual model based on the Ragno (knowledge graph description) and ZPT (knowledge graph navigation) ontologies. It is part of the Tensegrity project.
The intuition is that while LLMs and associated techniques have massively advanced the field of AI and offer considerable utility, the typical approach is missing the elephant in the room: __the Web__ - the biggest known knowledgebase in our universe. Semantic Web technologies offer data integration at a global scale, with tried & tested conceptual models for knowledge representation. __There is a lot of low-hanging fruit.__
Core functionality is working well after significant refactoring. The tell/ask pipeline through both MCP and workbench UI is functioning correctly, with proper storage and retrieval through SPARQL. The LLM provider priority system works with Groq as primary (fastest), falling back to Ollama for local processing. Enhanced logging throughout has improved debugging.
Recent fixes:
- Refactored the MCP layer; it's really the core of the system but most of it was implemented in a single huge file. It's now a bit more logical (under src/mcp). To keep it real, e2e integration tests run against live systems (under tests/integration/mcp).
Still messy areas:
- ZPT navigation concepts remain non-intuitive
- Codebase architecture needs consolidation
- Documentation sprawls and much is out-of-date
- Some test timing issues in automated environments
Current activity
- Working on a spike for visualization using self-organizing maps. The hope is it will help clarify the navigation ideas around ZPT.
- Working on auxiliary stuff around Transmissions with a view to populating the knowledgebase.
- Most tests need deleting and recreating...
previously...
MCP functionality focused down on 7 core verbs: ask, tell, augment, zoom, pan, tilt, inspect. See below and this blog post (in which Claude demonstrates an inability to count).
The UI has been totally re-written to reflect this; see workbench-howto. Currently testing.
Most recent direct workflow experiment: PDF ingestion
SPARQL Document Ingestion: Use examples/ingestion/SPARQLIngest.js for importing documents from SPARQL endpoints with configurable query templates (blog-articles, generic-documents, wikidata-entities). Supports batch processing, authentication, and direct integration with the semantic memory system.
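As a rough illustration of what a configurable query template amounts to (a hypothetical sketch, not the shipped blog-articles template), an ingestion query is a parameterised SPARQL query whose results are fed into the memory pipeline:

```javascript
// Hypothetical ingestion query template (illustrative only; the real templates
// ship alongside examples/ingestion/SPARQLIngest.js).
const blogArticlesTemplate = `
  PREFIX schema: <http://schema.org/>
  SELECT ?uri ?title ?content WHERE {
    GRAPH <{{graph}}> {
      ?uri a schema:Article ;
           schema:headline ?title ;
           schema:articleBody ?content .
    }
  } LIMIT {{limit}}
`;

// The ingester substitutes the placeholders, runs the query against the configured
// endpoint, and passes each result through the tell pipeline.
const query = blogArticlesTemplate
  .replace('{{graph}}', 'http://example.org/blog')
  .replace('{{limit}}', '50');
console.log(query);
```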
See also : blog (co-written with Claude Code)
Core Operation Flows: Semem's primary operations follow well-defined information flow patterns:
- Tell Operation Flow - Shows how content is ingested, processed through embedding generation and concept extraction, then stored in the dual SPARQL+FAISS system (a minimal sketch follows this list)
- Ask Operation Flow - Illustrates the multi-pass adaptive search process, context synthesis, and LLM-powered response generation
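A minimal sketch of the Tell side of this flow, assuming hypothetical embed/llm/store helpers rather than Semem's actual classes:

```javascript
// Illustrative Tell flow: embed, extract concepts, then write to both stores.
async function tell(content, { llm, embed, sparqlStore, faissIndex }) {
  const embedding = await embed(content);               // embedding generation
  const concepts = await llm.extractConcepts(content);  // concept extraction

  await sparqlStore.store({ content, concepts, embedding }); // RDF / SPARQL side
  faissIndex.add(embedding);                                 // vector index side

  return { concepts, dimensions: embedding.length };
}
```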
Mostly functional but very, very sketchy. It has an MCP server, HTTP API, a crude browser UI and code APIs. A lot to do before much will be remotely useful. It is in active development in June 2025. The codebase is big and chaotic; it is not for the fainthearted.
The codebase is registered as the npm package semem, though there hasn't been much time spent on this angle; currently it's pretty much essential to use this repo.
The dev process has involved pushing out in various directions with spikes, then circling back to ensure the core is still functional, then consolidation. To date it's been a one-man + various AI assistants (and a dog) operation. Despite me trying to keep things modular so they can be worked on in isolation, it's still complex enough that Claude (and I) struggle. Collaborators would be very welcome.
It is feature-complete as originally conceived, in the sense of all the right notes, but not necessarily in the right order. There is a lot of cruft and numerous bugs. Right now it's in a consolidation phase.
The SPARQL store, chat LLMs and embeddings service are all external. SPARQL uses the standard HTTP interfaces. There are also in-memory and JSON file storage subsystems, but these are an artifact of dev history, though they can be useful as a fallback during testing. LLMs use the hyperdata-clients library to simplify configuration.
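"Standard HTTP interfaces" here means the SPARQL 1.1 Protocol. A minimal sketch of querying the store over HTTP (the endpoint URL is an assumption; see config/config.json for the real values):

```javascript
// Query an external SPARQL endpoint over the standard protocol (Node 18+, global fetch).
async function sparqlSelect(endpoint, query) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/sparql-query',
      'Accept': 'application/sparql-results+json'
    },
    body: query
  });
  if (!res.ok) throw new Error(`SPARQL query failed: ${res.status}`);
  return res.json();
}

// Usage: count triples in the store
sparqlSelect('http://localhost:3030/semem/sparql',
  'SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }').then(console.log);
```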
The system is layered in a couple of dimensions: interfacing may be via the direct (SDK-style) API, the HTTP server, or the MCP server. Functionality is grouped by purpose broadly into Basic, Ragno and ZPT.
There are fairly comprehensive demos under examples which exercise the different parts of the system (think manual integration tests).
Internally the system relies on RDF-Ext and other RDFJS libraries for its graph model, FAISS for its primary vector-oriented functionality.
Semem has a browser-based UI in progress. This won't be useful for actual knowledge work any time soon (if ever) but it will have a role in checking system behaviour and experimenting.
---
The description below is very AI-sloppy.
Get started with Semem's natural language interface in 5 minutes:
1. Clone and install:
```bash
git clone https://github.com/danja/semem.git
cd semem
npm install
```
2. Start the MCP server:
```bash
npm run mcp:http
# OR
node mcp/http-server.js
```
The server starts on http://localhost:4105 with Simple Verbs REST endpoints.
Store knowledge with tell:
```bash
curl -X POST http://localhost:4105/tell \
  -H "Content-Type: application/json" \
  -d '{"content": "Machine learning is a subset of AI that enables computers to learn", "type": "concept"}'
```
Query knowledge with ask:
```bash
curl -X POST http://localhost:4105/ask \
  -H "Content-Type: application/json" \
  -d '{"question": "What is machine learning?"}'
```
Set context with zoom and pan:
```bash
# Set abstraction level
curl -X POST http://localhost:4105/zoom \
  -d '{"level": "entity"}'
```
Check your ZPT state:
```bash
curl http://localhost:4105/state
```
Workbench Development:
```bash
npm run start:workbench   # Start workbench server (port 8081)
npm run dev               # Start webpack dev server with hot reload (port 9000)
```
Legacy API servers:
```bash
npm start   # Starts both API server (port 4100) and legacy UI server (port 4120)
```
MCP Server for Claude Desktop:
```bash
# Run MCP server for Claude Desktop integration (local dev)
npm run mcp

# Run MCP HTTP server (local dev)
npm run mcp:http

# Or via published package (most reliable method)
git clone https://github.com/danja/semem.git
cd semem
npm install
npm run mcp        # Stdio MCP server
npm run mcp:http   # HTTP MCP server on port 4105

# Alternative: direct node execution
node mcp/index.js        # Stdio MCP server
node mcp/http-server.js  # HTTP MCP server on port 4105
```
Development mode:
```bash
npm run dev          # Webpack dev server for workbench UI
npm run start:mcp    # MCP server backend
```
Add Semem to your Claude Desktop MCP configuration:
```json
{
  "mcpServers": {
    "semem": {
      "command": "node",
      "args": ["/path/to/semem/mcp/index.js"]
    }
  }
}
```
Alternative setup (after cloning repository):
```json
{
  "mcpServers": {
    "semem": {
      "command": "npm",
      "args": ["run", "mcp"],
      "cwd": "/path/to/semem"
    }
  }
}
```
Then use the 7 Simple Verbs directly in Claude Desktop conversations!
Workbench UI Features
The modern web-based interface implementing the 7 Simple Verbs.

Core Panels:
- Tell: Store content with lazy/immediate processing options
- Ask: Query knowledge with HyDE, Wikipedia, Wikidata, and Web Search enhancements
- Augment: Extract concepts and process lazy content
- Navigate: ZPT (Zoom/Pan/Tilt) knowledge space navigation with real-time feedback
- Inspect: Debug system state and session cache
- Console: Real-time log viewing with filtering and search
Key Features:
- Real-time ZPT State Display: Current zoom/pan/tilt settings with descriptions
- Visual Navigation Feedback: Execute navigation with loading states and results
- Session Statistics: Track interactions, concepts, and session duration
- Connection Status: Live backend connectivity monitoring
Development Access:
```bash
npm run start:workbench   # Direct workbench server (port 8081)
npm run dev               # Webpack dev server with hot reload (port 9000)
```
The original UI system includes VSOM visualization and SPARQL browser (via npm start).

Key Features
- Simple Verbs Interface: 7-verb natural language API (tell, ask, augment, zoom, pan, tilt, inspect) for intuitive semantic operations
- Semantic Memory: Intelligent context retrieval and memory organization with vector embeddings and SPARQL
- Knowledge Graph Processing: End-to-end Ragno pipeline for entity extraction and relationship modeling
- Zoom, Pan, Tilt (ZPT): Knowledge navigation and processing with persistent state management
- Model Context Protocol (MCP): JSON-RPC 2.0 API for seamless LLM and agent integration with workflow orchestration (a sample request is sketched after this list)
- MCP Prompts: 8 pre-built workflow templates for complex multi-step operations
- Advanced Algorithms: HyDE, Web Search integration, VSOM, graph analytics, community detection, and Personal PageRank
- Interactive Visualizations: VSOM (Vector Self-Organizing Maps) for high-dimensional data exploration
- Multi-Provider LLM Support: Groq (default), Ollama, Claude, Mistral, and other providers via unified connector system
- Multiple Storage Backends: In-memory, JSON, and SPARQL/RDF with caching optimization
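For context, the MCP side speaks ordinary JSON-RPC 2.0. A sketch of what a tools/call request for one of the verbs might look like (the exact tool name registered by Semem's MCP server is an assumption here):

```javascript
// Shape of an MCP JSON-RPC 2.0 request invoking a Simple Verb as a tool.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'tell',  // assumed tool name; check the server's tool listing
    arguments: { content: 'Machine learning is a subset of AI', type: 'concept' }
  }
};
console.log(JSON.stringify(request, null, 2));
```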
Simple Verbs Interface
Semem features a 7-verb natural language interface that simplifies complex knowledge operations into conversational commands:
| Verb | Purpose | Example |
|------|---------|---------|
| tell | Store information with automatic embeddings | tell: "Machine learning uses neural networks" |
| ask | Query stored knowledge with semantic search | ask: "What is machine learning?" |
| augment | Extract concepts and enhance content | augment: {"target": "text to analyze", "operation": "concepts"} |
| zoom | Set abstraction level (entity/unit/text/community/corpus) | zoom: {"level": "entity"} |
| pan | Apply domain/temporal/keyword filtering | pan: {"domains": ["AI"], "keywords": ["neural networks"]} |
| tilt | Choose view perspective (keywords/embedding/graph/temporal) | tilt: {"style": "embedding"} |
| inspect | Debug and examine stored memories and session cache | inspect: {"what": "session", "details": true} |
```bash
# Store knowledge
curl -X POST http://localhost:4105/tell \
  -d '{"content": "The 7 Simple Verbs simplify semantic operations"}'

# Set context
curl -X POST http://localhost:4105/zoom -d '{"level": "entity"}'
curl -X POST http://localhost:4105/pan -d '{"domains": ["MCP"], "keywords": ["verbs"]}'

# Query with context
curl -X POST http://localhost:4105/ask \
  -d '{"question": "What are the Simple Verbs?"}'
```
The system maintains persistent ZPT state across operations, enabling contextual conversations with your knowledge base. All verbs work via REST API, MCP protocol, or direct SDK calls.
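The same tell/zoom/ask sequence from JavaScript, using the documented REST endpoints (a sketch; response field names may differ from what a given version returns):

```javascript
// Drive the Simple Verbs REST API from Node (18+, global fetch).
const BASE = 'http://localhost:4105';

async function verb(name, payload) {
  const res = await fetch(`${BASE}/${name}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
  return res.json();
}

async function demo() {
  await verb('tell', { content: 'The 7 Simple Verbs simplify semantic operations' });
  await verb('zoom', { level: 'entity' });
  const answer = await verb('ask', { question: 'What are the Simple Verbs?' });
  console.log(answer);
}

demo().catch(console.error);
```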
See docs/PROMPT.md for detailed usage instructions.
Data Visualization
Semem includes an advanced VSOM (Vector Self-Organizing Map) visualization system for exploring high-dimensional data:
Features:
- Interactive SOM grid visualization with zoom/pan
- Real-time training visualization
- Feature map exploration (U-Matrix, component planes)
- Interactive clustering of SOM nodes
- Responsive design for all screen sizes

Usage:
1. Navigate to the VSOM tab in the Semem UI
2. Load or train a SOM model
3. Explore the visualization and interact with nodes
4. Use the feature maps to understand data relationships
For more details, see the VSOM Documentation.
Project Structure
```
semem/
├── src/                    # Core library code
│   ├── handlers/           # LLM and embedding handlers
│   ├── stores/             # Storage backends (JSON, SPARQL, etc.)
│   ├── connectors/         # LLM provider connectors
│   ├── servers/            # HTTP server implementations
│   ├── ragno/              # Knowledge graph algorithms
│   ├── zpt/                # Zero-Point Traversal system
│   └── frontend/
│       ├── workbench/      # Modern workbench UI (primary)
│       │   ├── public/     # Workbench source files
│       │   └── server.js   # Workbench development server
│       └── [legacy]/       # Legacy UI components
├── dist/                   # Build outputs
│   └── workbench/          # Built workbench application
├── examples/               # Organized examples by category
│   ├── basic/              # Core functionality examples
│   ├── ragno/              # Knowledge graph examples
│   ├── mcp/                # MCP integration examples
│   ├── zpt/                # ZPT processing examples
│   └── pending/            # Work-in-progress examples
├── mcp/                    # MCP server implementation
│   ├── http-server.js      # Primary MCP HTTP server
│   ├── tools/              # MCP tool implementations
│   └── prompts/            # MCP workflow templates
├── config/                 # Configuration files
├── webpack.config.js       # Workbench build configuration
└── docs/                   # Comprehensive documentation
```
Server Architecture
Semem provides a complete HTTP server infrastructure with a modern workbench-first approach:
#### MCP Server (mcp/http-server.js)
The primary server, providing both the MCP protocol and the Simple Verbs REST API:
- Simple Verbs REST API: 7-verb natural language interface (tell, ask, augment, zoom, pan, tilt, inspect)
- MCP Protocol: JSON-RPC 2.0 API for LLM integration (32 tools + 15 resources + 8 prompts)
- Session Management: Persistent ZPT state across operations
- Error Handling: Robust error recovery and partial results
Key Simple Verbs Endpoints:
```
POST /tell # Store content with embeddings
POST /ask # Query knowledge with semantic search
POST /augment # Extract concepts and enhance content
POST /zoom # Set abstraction level
POST /pan # Apply domain/temporal filtering
POST /tilt # Choose view perspective
POST /inspect # Debug system state
GET /state # Get current ZPT state
```
#### Workbench (src/frontend/workbench/)
A modern web-based interface implementing the 7 Simple Verbs:
- Interactive Panels: Tell, Ask, Augment, Navigate, Inspect, Console
- Real-time Feedback: ZPT state display and navigation results
- Development Server: Hot reload with webpack dev server
- Session Tracking: Live statistics and connection monitoring
#### Legacy API Server (api-server.js)
A REST API server for backward compatibility:
- Memory Operations: Store, search, and retrieve semantic memories
- Chat Interface: Conversational AI with context awareness
- Embedding Services: Vector embedding generation and management
- Configuration Management: Dynamic provider and storage configuration
#### Server Manager (server-manager.js)
A process management system for coordinating multiple server instances:
- Process Lifecycle: Start, monitor, and gracefully stop servers
- Port Management: Automatic port conflict resolution
- Health Monitoring: Real-time process status tracking
- Signal Handling: Graceful shutdown coordination
- Logging: Centralized output management with timestamps
#### Start All (start-all.js)
An orchestration script for launching the complete server ecosystem:
- Configuration Loading: Unified config system integration
- Multi-Server Startup: Coordinated API and UI server launch
- Interactive Control: Keyboard shortcuts for shutdown (Ctrl+C, 'q')
- Error Handling: Robust startup failure recovery
Recommended - Workbench Development:
```bash
# Start MCP server (backend)
npm run start:mcp         # MCP server (port 4105)

# Start workbench (frontend)
npm run start:workbench   # Workbench server (port 8081)
# OR for development with hot reload
npm run dev               # Webpack dev server (port 9000)
```
Legacy - Full Server Stack:
```bash
# Start all legacy servers
./start.sh    # API server (4100) + UI server (4120)
# OR
npm start

# Individual server startup
node src/servers/api-server.js   # API only (port 4100)
node mcp/http-server.js          # MCP server (port 4105)
```
Servers are configured via config/config.json:
```json
{
  "servers": {
    "api": 4100,            // API server port
    "ui": 4120,             // UI server port
    "redirect": 4110,       // Optional redirect port
    "redirectTarget": 4120
  },
  "storage": {
    "type": "sparql",       // or "json", "memory"
    "options": { /* storage-specific config */ }
  },
  "llmProviders": [
    { /* provider configurations */ }
  ]
}
```
Docker Deployment
Semem provides comprehensive Docker support for both development and production deployments with a multi-service architecture.
> Complete Docker Guide - Detailed installation, configuration, and troubleshooting instructions
1. Production Deployment:
```bash
# Clone the repository
git clone https://github.com/danja/semem.git
cd semem

# Copy your existing .env or create from template
cp .env.docker.example .env
# Edit .env with your API keys (same format as local)

# Start all services
docker compose up -d

# Check service status
docker compose ps
docker compose logs -f semem
```
2. Development Deployment:
```bash
# Use your existing .env file (works as-is)
docker compose -f docker-compose.dev.yml up -d

# View logs
docker compose -f docker-compose.dev.yml logs -f semem-dev
```
> Need Help? See the Docker Guide for detailed installation steps, troubleshooting, and advanced configuration options.
The Docker deployment includes the following services:
| Service | Description | Ports | Purpose |
|---------|-------------|-------|---------|
| semem | Main application container | 4100, 4101, 4102 | API, MCP, Workbench servers |
| fuseki | Apache Jena SPARQL database | 3030 | RDF/SPARQL storage backend |
| nginx | Reverse proxy (optional) | 80, 443 | SSL termination, load balancing |
#### Production Environment (.env.docker)
```bash
# Core Configuration
NODE_ENV=production
SEMEM_API_KEY=your-secure-api-key

# SPARQL Store Configuration
SPARQL_HOST=localhost       # SPARQL server hostname (localhost for local, fuseki for Docker)
SPARQL_PORT=3030            # SPARQL server port (3030 for local, 4050 for Docker external access)
SPARQL_USER=admin
SPARQL_PASSWORD=your-secure-password

# LLM Provider API Keys (configure at least one)
MISTRAL_API_KEY=your-mistral-key
CLAUDE_API_KEY=your-claude-key
OPENAI_API_KEY=your-openai-key
NOMIC_API_KEY=your-nomic-key
```
#### Development Environment (.env.docker.dev)
The development environment uses simplified configuration with optional external API keys.
Persistent Data:
- fuseki_data: SPARQL database storage
- semem_data: Application data and cache
- semem_logs: Application logs

Configuration Management:
```bash
# Production: mount configuration
./config/config.docker.json:/app/config/config.json:ro

# Development: live code editing
./src:/app/src:ro
./mcp:/app/mcp:ro
```
#### 1. Production with SSL (using nginx profile)
```bash
# Generate SSL certificates
mkdir -p nginx/ssl
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout nginx/ssl/semem.key -out nginx/ssl/semem.crt

# Start with reverse proxy
docker compose --profile proxy up -d
```
#### 2. Multi-architecture builds
```bash
# Build for multiple architectures
docker buildx build --platform linux/amd64,linux/arm64 -t semem:latest .
```
Service Health Checks:
```bash
# Check all services
docker compose ps

# Individual service health
curl http://localhost:4100/health   # API server
curl http://localhost:4102/health   # Workbench
curl http://localhost:3030/$/ping   # Fuseki
```
Application Logs:
```bash
# Follow all logs
docker compose logs -f

# Specific service logs
docker compose logs -f semem
docker compose logs -f fuseki
```
#### Common Issues
1. Port conflicts:
```bash
# Check port usage
sudo lsof -i :4100,4101,4102,3030,11434
# Modify ports in docker-compose.yml if needed
```
2. Permission issues:
```bash
# Fix volume permissions
sudo chown -R 1001:1001 ./data ./logs
```
3. SPARQL connection issues:
```bash
# Check Fuseki status
curl -f http://localhost:3030/$/ping

# Restart Fuseki
docker compose restart fuseki
```
#### Performance Tuning
Resource Limits:
```yaml
# In docker-compose.yml
deploy:
  resources:
    limits:
      memory: 4G
      cpus: '2.0'
```
JVM Settings for Fuseki:
```bash
# Environment variable in docker-compose.yml
JVM_ARGS=-Xmx2g -Xms1g
```
Live Development with Docker:
```bash
# Start development stack
docker compose -f docker-compose.dev.yml up -d

# Make changes to source code (auto-reloaded via volumes)
# Rebuild only when dependencies change
docker compose -f docker-compose.dev.yml build semem-dev

# Debug with logs
docker compose -f docker-compose.dev.yml logs -f semem-dev
```
Development Tools:
```bash
# Access development container
docker compose -f docker-compose.dev.yml exec semem-dev bash

# Run tests inside container
docker compose -f docker-compose.dev.yml exec semem-dev npm test

# Debug Node.js (port 9229 exposed)
# Connect your IDE debugger to localhost:9229
```
Production Security:
- Use strong passwords for SPARQL_PASSWORD
- Generate secure SEMEM_API_KEY
- Keep API keys in secure environment files
- Use nginx with proper SSL configuration
- Regularly update base images
Network Security:
```bash
# Production: restrict external access
# Only expose necessary ports (80, 443 via nginx)
# Keep internal services (fuseki) on internal network
```
Data Backup:
```bash
# Backup Fuseki data
docker run --rm -v semem_fuseki_data:/data -v $(pwd):/backup alpine tar czf /backup/fuseki-backup.tar.gz /data

# Backup application data
docker run --rm -v semem_semem_data:/data -v $(pwd):/backup alpine tar czf /backup/semem-data-backup.tar.gz /data
```
Migration:
```bash
# Export configuration
docker compose exec semem cat /app/config/config.json > config-backup.json

# Migrate to new deployment:
# 1. Copy volumes or restore from backup
# 2. Update docker-compose.yml with new configuration
# 3. Start services
```
Development Mode:
```bash
# Workbench development with hot reload
npm run dev          # Webpack dev server (port 9000)
npm run start:mcp    # MCP backend (port 4105)

# Legacy development
LOG_LEVEL=debug ./start.sh   # Legacy servers with debug logging
```
Production Deployment:
```bash
# Build workbench for production
npm run build:workbench

# Start production servers
NODE_ENV=production npm run start:mcp

# With process management (PM2)
pm2 start mcp/http-server.js --name semem-mcp
pm2 start src/frontend/workbench/server.js --name semem-workbench
```
The server infrastructure includes comprehensive monitoring:
- Health Checks: /api/health endpoint with component status
- Metrics: /api/metrics endpoint with performance data
- Process Monitoring: Real-time process status in server manager
- Graceful Shutdown: Proper cleanup on SIGTERM/SIGINT signals

Quick Start
```bash
# Clone and install
git clone https://github.com/your-org/semem.git
cd semem
npm install

# Configure environment
cp example.env .env
# Edit .env with your API keys and settings
```
1. Groq API Key (recommended for fast cloud inference):
```bash
# Set your API key in .env
GROQ_API_KEY=your-groq-api-key
```
2. Ollama (optional for local processing):
```bash
# Install required models
ollama pull qwen2:1.5b         # For chat/text generation
ollama pull nomic-embed-text   # For embeddings
```
3. SPARQL Endpoint (core semantic storage):
```bash
# Using Docker
docker run -d --name fuseki -p 3030:3030 stain/jena-fuseki
```
```bash
# Start HTTP API and UI servers
./start.sh

# Access web interface
open http://localhost:4120

# Test API endpoints
curl http://localhost:4100/api/health
```
> See the Server Architecture section for detailed server documentation.
```bash
# Basic memory operations
node examples/basic/MemoryEmbeddingJSON.js

# Knowledge graph processing
node examples/ragno/RagnoPipelineDemo.js

# MCP server integration (32 tools + 15 resources + 8 prompt workflows)
npm run mcp                                      # Start MCP server
node examples/mcp/SememCoreDemo.js               # Core memory operations
node examples/mcp/RagnoCorpusDecomposition.js    # Knowledge graphs
node examples/mcp/ZPTBasicNavigation.js          # 3D navigation

# Complete ZPT suite (5 comprehensive demos)
node examples/mcp/ZPTBasicNavigation.js          # Navigation fundamentals
node examples/mcp/ZPTAdvancedFiltering.js        # Multi-dimensional filtering
node examples/mcp/ZPTUtilityTools.js             # Schema and validation
node examples/mcp/ZPTPerformanceOptimization.js  # Performance tuning
node examples/mcp/ZPTIntegrationWorkflows.js     # Cross-system integration

# MCP Prompts workflows (NEW!)
# Start MCP server first: npm run mcp
# Then use Claude Desktop or other MCP clients to execute:
# - semem-research-analysis: Analyze research documents
# - semem-memory-qa: Q&A with semantic memory
# - ragno-corpus-to-graph: Build knowledge graphs from text
# - semem-full-pipeline: Complete memory+graph+navigation workflows
```
Core Components
Semantic Memory:
- Vector embeddings for semantic similarity
- Context window management with intelligent chunking
- Multi-backend storage (JSON, SPARQL, in-memory)
- Intelligent retrieval with relevance scoring

Ragno Knowledge Graph Processing:
- Corpus decomposition into semantic units and entities
- Relationship extraction and RDF modeling
- Community detection using Leiden algorithm
- Graph analytics (centrality, k-core, PageRank)

ZPT (Zoom/Pan/Tilt) Processing:
- Zoom/Pan/Tilt navigation paradigm
- Content chunking strategies (semantic, fixed, adaptive)
- Corpuscle selection algorithms
- Transformation pipelines for content processing

MCP Integration:
- 32 comprehensive tools covering all Semem capabilities
- 15 specialized resources for documentation and data access
- 8 MCP Prompts for workflow orchestration and multi-step operations
- Complete ZPT integration with 6 navigation tools
- Cross-system workflows combining Memory + Ragno + ZPT
- Standardized API for LLM integration with schema validation

MCP Prompts:
Transform complex multi-step operations into simple, guided workflows.

Memory Workflows:
- semem-research-analysis - Research document analysis with semantic memory context
- semem-memory-qa - Q&A using semantic memory retrieval and context assembly
- semem-concept-exploration - Deep concept exploration through memory relationships

Knowledge Graph Construction:
- ragno-corpus-to-graph - Transform text corpus to structured RDF knowledge graph
- ragno-entity-analysis - Analyze and enrich entities with contextual relationships

3D Navigation:
- zpt-navigate-explore - Interactive 3D knowledge space navigation and analysis

Integrated Workflows:
- semem-full-pipeline - Complete memory → graph → navigation processing pipeline
- research-workflow - Academic research document processing and insight generation

Key Features:
- Multi-step Coordination: Chain multiple tools with context passing
- Dynamic Arguments: Type validation, defaults, and requirement checking
- Conditional Execution: Skip workflow steps based on conditions
- Error Recovery: Graceful handling of failures with partial results
- Execution Tracking: Unique execution IDs and detailed step results
Advanced Algorithms
HyDE (Hypothetical Document Embeddings):
Enhances retrieval by generating hypothetical answers using LLMs, with uncertainty modeling via ragno:maybe properties.
```bash
node examples/ragno/Hyde.js
```
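For orientation, the HyDE idea in a minimal hypothetical sketch (see examples/ragno/Hyde.js for the real implementation):

```javascript
// Illustrative HyDE retrieval: search with the embedding of a generated hypothetical
// answer rather than the raw question (helpers here are assumptions, not Semem's API).
async function hydeSearch(question, { llm, embed, vectorStore }) {
  // 1. Ask the LLM for a plausible (possibly wrong) answer
  const hypothetical = await llm.complete(`Write a short answer to: ${question}`);

  // 2. Embed the hypothetical answer - it tends to sit closer to relevant documents
  const vector = await embed(hypothetical);

  // 3. Retrieve by similarity; results could be flagged with ragno:maybe to mark uncertainty
  return vectorStore.similaritySearch(vector, { limit: 10 });
}
```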
Web Search Integration:
Provides real-time information access through DuckDuckGo web search, enabling queries about current events, recent developments, and time-sensitive information not available in static knowledge sources.
```bash
# Web search is integrated into the enhancement system
# Available through the Ask interface with the useWebSearch option
curl -X POST http://localhost:4105/ask \
  -H "Content-Type: application/json" \
  -d '{"question": "Latest AI developments", "useWebSearch": true}'
```
VSOM (Vector Self-Organizing Maps):
Provides entity clustering and semantic organization with support for multiple topologies.
```bash
node examples/ragno/VSOM.js
```
Graph Analytics:
- K-core decomposition for dense cluster identification
- Betweenness centrality for bridge node discovery
- Community detection (Leiden algorithm)
- Personalized PageRank for semantic traversal
```bash
node examples/ragno/AnalyseGraph.js
node examples/ragno/Communities.js
node examples/ragno/PPR.js
```
Examples Documentation
The examples/ directory contains comprehensive demonstrations organized by functionality:
- Basic Examples (examples/basic/): Core memory operations, embedding generation, search
- Ragno Examples (examples/ragno/): Knowledge graph processing, entity extraction, RDF
- MCP Examples (examples/mcp/): Complete MCP integration with 32 tools + 15 resources + 8 prompt workflows
  - ZPT Suite: 5 comprehensive demos covering all ZPT navigation capabilities (complete)
  - Memory Integration: Core semantic memory with context management
  - Knowledge Graphs: Ragno corpus decomposition and RDF processing
  - Cross-System Workflows: Advanced integration patterns
- MCP Prompts: 8 workflow templates for orchestrating complex multi-step operations (new)
- ZPT Examples (examples/zpt/): Content processing and navigation

See examples/README.md and examples/mcp/README.md for detailed documentation and usage instructions.
Configuration
Storage Backends:
JSON Storage (simple persistence):
```json
{
  "storage": {
    "type": "json",
    "options": {
      "filePath": "./data/memories.json"
    }
  }
}
```
SPARQL Storage (semantic web integration):
```json
{
  "storage": {
    "type": "sparql",
    "options": {
      "query": "http://${SPARQL_HOST:-localhost}:${SPARQL_PORT:-3030}/semem/sparql",
      "update": "http://${SPARQL_HOST:-localhost}:${SPARQL_PORT:-3030}/semem/update",
      "data": "http://${SPARQL_HOST:-localhost}:${SPARQL_PORT:-3030}/semem/data",
      "graphName": "http://hyperdata.it/content",
      "user": "${SPARQL_USER}",
      "password": "${SPARQL_PASSWORD}"
    }
  }
}
```
Environment Variables:
Semem supports environment variable substitution in configuration files using ${VARIABLE_NAME:-default} syntax.

SPARQL Connection Variables:
- SPARQL_HOST: SPARQL server hostname (defaults to localhost)
- SPARQL_PORT: SPARQL server port (defaults to 3030)
- SPARQL_USER: SPARQL database username
- SPARQL_PASSWORD: SPARQL database password

Usage Examples:
```bash
# Local development (uses defaults: localhost:3030)
npm start

# Custom SPARQL endpoint
export SPARQL_HOST=fuseki.example.com
export SPARQL_PORT=8080
npm start

# Docker automatically sets: SPARQL_HOST=fuseki, SPARQL_PORT=4050
docker compose up -d
```
#### Session-Level Memory Cache
Semem implements a hybrid storage strategy that combines persistent storage with session-level caching for immediate semantic retrieval.

How it works:
- tell operations store content in both persistent storage (SPARQL/JSON) AND a session-level cache
- ask operations search the session cache first, then persistent storage, combining results by semantic similarity (a sketch of this ranking step follows below)
- Immediate availability: Recently stored concepts are immediately available for retrieval within the same session
- Semantic similarity: Uses cosine similarity on embeddings for intelligent result ranking

Session cache features:
- In-memory vector search with similarity caching for performance
- Concept tracking - maintains a set of all concepts from the session
- Debugging support - use the inspect tool to examine cache contents:
```bash
curl -X POST http://localhost:4105/inspect \
  -H "Content-Type: application/json" \
  -d '{"what": "session", "details": true}'
```
This solves the common issue where tell → ask operations couldn't find recently stored content due to indexing delays in persistent storage.
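A minimal sketch of that ranking step, assuming plain arrays of hits with an embedding field (illustrative only, not Semem's internal code):

```javascript
// Merge session-cache hits with persistent-store hits and rank by cosine similarity.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function rankResults(queryEmbedding, sessionHits, persistentHits) {
  return [...sessionHits, ...persistentHits]
    .map(item => ({ ...item, score: cosine(queryEmbedding, item.embedding) }))
    .sort((a, b) => b.score - a.score);
}
```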
LLM Providers:
Configure multiple providers in config/config.json:
```json
{
  "llmProviders": [
    {
      "type": "groq",
      "apiKey": "${GROQ_API_KEY}",
      "chatModel": "llama-3.1-8b-instant",
      "priority": 1,
      "capabilities": ["chat"]
    },
    {
      "type": "mistral",
      "apiKey": "${MISTRAL_API_KEY}",
      "chatModel": "mistral-small-latest",
      "priority": 2,
      "capabilities": ["chat"]
    },
    {
      "type": "ollama",
      "baseUrl": "http://localhost:11434",
      "chatModel": "qwen2:1.5b",
      "embeddingModel": "nomic-embed-text",
      "priority": 2,
      "capabilities": ["chat", "embedding"]
    }
  ]
}
```
MCP Integration
Semem implements Anthropic's Model Context Protocol (MCP) for seamless LLM integration:
If you've installed Semem as an npm package, you can run the MCP server directly:
```bash
# Install globally
npm install -g semem

# Run MCP server via npx (recommended)
npx semem mcp

# Run HTTP MCP server
npx semem mcp-http --port=3000

# Or if installed globally
semem mcp
semem mcp --transport http --port 3000
```
```bash
# Start MCP server
npm run mcp

# Connect from Claude Desktop or other MCP clients
# The server provides 32 tools + 15 resources + 8 prompt workflows covering all Semem capabilities
```
Add to your Claude Desktop MCP configuration:
```json
{
  "mcpServers": {
    "semem": {
      "command": "npx",
      "args": ["semem", "mcp"]
    }
  }
}
```
Or for HTTP transport:
```json
{
  "mcpServers": {
    "semem": {
      "command": "npx",
      "args": ["semem", "mcp-http", "--port=3000"],
      "env": {
        "MCP_PORT": "3000"
      }
    }
  }
}
```
Available MCP Tools:
- Memory Operations (5 tools): Store, retrieve, generate responses, embeddings, concepts
- Storage Management (6 tools): Backend switching, backup/restore, migration, statistics
- Context Management (4 tools): Context windows, configuration, pruning, summarization
- System Monitoring (4 tools): Configuration, metrics, health checks, system status
- Knowledge Graphs (8 tools): Ragno corpus decomposition, entity extraction, SPARQL, analytics
- ZPT Navigation (6 tools): 3D navigation, filtering, validation, schema, optimization

MCP Prompts (8 workflows):
- Memory Workflows (3): Research analysis, memory Q&A, concept exploration
- Knowledge Graph (2): Corpus-to-graph, entity analysis
- 3D Navigation (1): Interactive exploration
- Integrated (2): Full pipeline, research workflow

MCP Resources (15):
- System Resources (7): Status, API docs, schemas, configuration, metrics
- Ragno Resources (4): Ontology, pipeline guide, examples, SPARQL templates
- ZPT Resources (4): Navigation schema, examples, concepts guide, performance optimization

Testing
```bash
# Run core tests
npm test

# Run LLM-dependent tests
npm run test:llms

# Generate coverage report
npm run test:coverage

# Run with a specific test file
npm test -- tests/unit/Config.spec.js
```
Development
```bash
# Development - Workbench
npm run dev               # Webpack dev server with hot reload (port 9000)
npm run start:workbench   # Workbench server (port 8081)
npm run start:mcp         # MCP server (port 4105)

# Build
npm run build             # Full build (types + workbench)
npm run build:workbench   # Build workbench for production
npm run build:dev         # Development build

# Testing
npm test                  # Run unit and integration tests
npm run test:unit         # Run unit tests only
npm run test:coverage     # Generate coverage report
npm run test:e2e          # End-to-end tests

# Documentation
npm run docs              # Generate JSDoc documentation

# Legacy Servers
./start.sh                        # Start all legacy servers (API + UI)
node src/servers/api-server.js    # API server only (port 4100)

# MCP Server
npm run mcp               # Start MCP server (stdio mode)
npm run mcp:http          # Start MCP HTTP server (port 4105)
```
Workbench Development:
Quick Start:
```bash
# Terminal 1: Start MCP backend
npm run start:mcp

# Terminal 2: Start development server
npm run dev
# Opens http://localhost:9000 with hot reload
```
Architecture:
- Frontend: Webpack dev server (port 9000) with hot module replacement
- Backend: MCP server (port 4105) with Simple Verbs REST API
- Proxy: Webpack automatically proxies API calls to the MCP server (see the sketch after the production build block below)
- Build Output: dist/workbench/ contains the production build

Key Features:
- Hot Reload: Changes to workbench files trigger automatic browser refresh
- Source Maps: Full debugging support in development mode
- Module Aliases: Clean imports with @workbench, @services, @components
- Live Backend: MCP server provides real semantic memory functionality

Production Build:
```bash
npm run build:workbench   # Creates optimized bundle in dist/workbench/
```
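For orientation, the proxy rule amounts to something like the following sketch (illustrative; the real setup lives in webpack.config.js and the exact syntax depends on the webpack-dev-server version in use):

```javascript
// Hypothetical webpack-dev-server proxy: forward Simple Verbs calls to the MCP server.
module.exports = {
  devServer: {
    port: 9000,
    proxy: [
      {
        context: ['/tell', '/ask', '/augment', '/zoom', '/pan', '/tilt', '/inspect', '/state'],
        target: 'http://localhost:4105'   // MCP server with the Simple Verbs REST API
      }
    ]
  }
};
```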
Adding New Examples:
1. Place in the appropriate category directory (basic/, ragno/, mcp/, zpt/)
2. Follow the naming convention: PascalCase.js
3. Include comprehensive documentation
4. Add error handling and cleanup
5. Update examples/README.md

Documentation
- Examples Documentation: Comprehensive examples guide
- API Documentation: REST API and SDK reference
- MCP Documentation: Model Context Protocol integration
- MCP Prompts Guide: Complete workflow orchestration guide
- MCP Prompts Examples: Real-world usage patterns
- Architecture Guide: System design and components
- Algorithm Documentation: Advanced algorithms guide
Troubleshooting
Ollama Connection:
```bash
# Check Ollama status
ollama list
curl http://localhost:11434/api/tags
```
SPARQL Endpoint:
```bash
# Test connectivity
curl -X POST http://localhost:3030/dataset/query \
  -H "Content-Type: application/sparql-query" \
  -d "SELECT * WHERE { ?s ?p ?o } LIMIT 1"
```
Memory Issues:
```bash
# Increase Node.js memory limit
export NODE_OPTIONS="--max-old-space-size=4096"
```
Enable detailed logging:
```bash
LOG_LEVEL=debug node examples/basic/MemoryEmbeddingJSON.js
```
Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Update documentation
5. Submit a pull request
MIT License - see LICENSE for details.
- Documentation: docs/
- Examples: examples/
- MCP Server: mcp/
- Issue Tracker: GitHub Issues
---
Semem - Intelligent semantic memory for the AI age.