Ultra-fast contextual memory for AI agents. Graph + Vector + Temporal in FalkorDB.
```bash
npm install kontext-ts
```
- **Fast** - Sub-30ms context retrieval
- **Automatic** - LLM-powered entity and relationship extraction
- **Temporal** - Track when facts become true or false
- **Simple** - Three methods: `add()`, `search()`, `getContext()`
- **Multi-tenant** - Isolated memory per user/agent/session
```bash
# Start FalkorDB
docker run -p 6379:6379 falkordb/falkordb
```

```typescript
import { Kontext } from 'kontext-ts';

const kontext = new Kontext({
llm: { provider: 'gemini' }
});
// Add memory
await kontext.add('My name is Alice and I work at Acme Corp', {
userId: 'alice'
});
// Retrieve context
const context = await kontext.getContext('Tell me about Alice', {
userId: 'alice'
});
// → "Alice works at Acme Corp"
```

## Why Kontext?
| Feature | Traditional RAG | Kontext |
|---------|-----------------|---------|
| Storage | Vector DB only | Graph + Vector + Text |
| Relationships | ❌ | ✅ Automatic extraction |
| Temporal | ❌ | ✅ Built-in |
| Latency | 50-200ms | < 30ms |
## Installation
```bash
npm install kontext-ts
# or
bun add kontext-ts
```

### Requirements
- Node.js 20+
- FalkorDB (via Docker)
- LLM API key (Gemini, OpenAI, Anthropic, or Ollama)
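If you prefer Docker Compose over a raw `docker run`, a minimal service definition looks like the sketch below. The `/data` volume path follows the FalkorDB image's Redis conventions and is an assumption; adjust it for your setup.

```yaml
# docker-compose.yml -- minimal FalkorDB service for local development
services:
  falkordb:
    image: falkordb/falkordb
    ports:
      - "6379:6379"
    volumes:
      - falkordb-data:/data   # persist graphs across container restarts
volumes:
  falkordb-data:
```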
## Configuration
```typescript
const kontext = new Kontext({
// FalkorDB connection (optional, defaults shown)
falkordb: {
host: 'localhost',
port: 6379,
},
// LLM provider (required)
llm: {
provider: 'gemini', // 'gemini' | 'openai' | 'anthropic' | 'ollama'
model: 'gemini-2.5-flash',
// apiKey: auto-detected from environment
},
});
```

### Environment Variables
```bash
# Gemini (default)
GOOGLE_API_KEY=your-key

# OpenAI
OPENAI_API_KEY=your-key

# Anthropic
ANTHROPIC_API_KEY=your-key

# Ollama (no key needed)
OLLAMA_BASE_URL=http://localhost:11434
```

## API
### `add()`
Add messages to memory. Automatically extracts entities and relationships.
```typescript
// From string
await kontext.add('I love pizza', { userId: 'bob' });

// From messages
await kontext.add([
{ role: 'user', content: 'Book a table for 2' },
{ role: 'assistant', content: 'Done!' }
], { userId: 'bob' });
// Async (fire-and-forget)
await kontext.add(messages, { userId: 'bob', async: true });
```
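The messages array uses the `{ role, content }` shape shown above. A small helper like the one below (illustrative only, not part of kontext-ts) can pair up raw user/assistant turns into that shape:

```typescript
// Message shape accepted by add(), per the example above.
interface Message {
  role: 'user' | 'assistant';
  content: string;
}

// Illustrative helper: turn (user, assistant) pairs into add()-ready messages.
function toMessages(turns: Array<[string, string]>): Message[] {
  return turns.flatMap(([user, assistant]) => [
    { role: 'user' as const, content: user },
    { role: 'assistant' as const, content: assistant },
  ]);
}

const messages = toMessages([['Book a table for 2', 'Done!']]);
// → two messages, ready for kontext.add(messages, { userId: 'bob', async: true })
```

Combined with `async: true`, a chat loop can record each turn in ~2ms instead of blocking on the multi-second LLM extraction.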
### `search()`

Search memory for relevant facts and relationships.
```typescript
const results = await kontext.search('food preferences', {
userId: 'bob'
});

// {
// facts: ['Bob loves pizza'],
// relations: [{ source: 'Bob', relation: 'LIKES', target: 'Pizza', ... }],
// entities: [{ name: 'Bob', type: 'Person', ... }],
// score: 0.95
// }
```
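A search result can be flattened into prompt-ready lines. The types below mirror the example result above (extra fields omitted); the helper itself is an assumption about how you might consume the result, not part of the kontext-ts API:

```typescript
// Shapes mirroring the search() result shown above (extra fields omitted).
interface Relation {
  source: string;
  relation: string;
  target: string;
}

interface SearchResult {
  facts: string[];
  relations: Relation[];
  score: number;
}

// Flatten facts and relations into plain-text lines for a prompt.
function toPromptLines(result: SearchResult): string[] {
  return [
    ...result.facts,
    ...result.relations.map((r) => `${r.source} ${r.relation} ${r.target}`),
  ];
}

const sample: SearchResult = {
  facts: ['Bob loves pizza'],
  relations: [{ source: 'Bob', relation: 'LIKES', target: 'Pizza' }],
  score: 0.95,
};
console.log(toPromptLines(sample)); // ['Bob loves pizza', 'Bob LIKES Pizza']
```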
### `getContext()`

Get a formatted context string for agent prompts.
```typescript
const context = await kontext.getContext('Help Bob order food', {
  userId: 'bob'
});

// Use in your agent prompt
const prompt = `You are a helpful assistant.

${context}

User: ${userMessage}`;
```
### `delete()`

Delete all memory for a user/agent/session.
```typescript
await kontext.delete({ userId: 'bob' });
```
## Memory Scopes

Memory is isolated by `userId`, `agentId`, or `sessionId`:
```typescript
// User memory (persistent)
await kontext.add(msg, { userId: 'alice' });
// Agent memory (shared across users)
await kontext.add(msg, { agentId: 'support-bot' });
// Session memory (temporary)
await kontext.add(msg, { sessionId: 'sess-123' });
```
## Examples

```bash
bun run examples/chat.ts
bun run examples/basic.ts
bun run examples/hotel-agent.ts
```
## Architecture

```
+-------------------------------------------+
|                  KONTEXT                  |
|                                           |
|   add()       search()      getContext()  |
|     |            |               |        |
|     v            v               v        |
|  +-------------------------------------+  |
|  |            MEMORY ENGINE            |  |
|  |      Extract -> Store -> Search     |  |
|  +------------------+------------------+  |
|                     |                     |
|  +------------------v------------------+  |
|  |              FALKORDB               |  |
|  |     Entities + Edges + Episodes     |  |
|  +-------------------------------------+  |
|                                           |
|  +-------------------------------------+  |
|  |            LLM PROVIDERS            |  |
|  |     Gemini | OpenAI | Anthropic     |  |
|  +-------------------------------------+  |
+-------------------------------------------+
```
## Performance

| Operation | Latency |
|-----------|---------|
| `search()` | ~20-30ms |
| `getContext()` | ~25-35ms |
| `add()` (sync) | ~4-8s (LLM extraction) |
| `add()` (async) | ~2ms (fire-and-forget) |
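These numbers depend on hardware, dataset size, and LLM provider. A small wrapper like this (illustrative, using Node's global `performance`) makes it easy to check them in your own environment:

```typescript
// Time any async call and log its wall-clock latency.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now();
  const result = await fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)}ms`);
  return result;
}

// e.g. await timed('search', () => kontext.search('food preferences', { userId: 'bob' }));
```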
## License

MIT