# Memory Drift Inspector for Conversational AI

Local-first CLI tool for analyzing conversation transcripts and detecting memory drift.

```bash
npm install memograph-cli
```

Analyze conversation transcripts and detect when AI assistants lose context: get a drift score and identify repetitions, forgotten preferences, and contradictions using AI-powered semantic analysis.
---
- What it does
- Why this exists
- Try it now
- Install
- Quickstart
- Using Memograph
- Interactive Mode
- CLI Mode
- Configuration
- Input Format
- Output
- Privacy & Security
- For Developers & Contributors
- Troubleshooting
- License
---
## What it does

Memograph analyzes conversation transcripts to detect when AI assistants lose context or "forget" information:
- Detects repetitions: User forced to repeat themselves
- Finds session resets: Assistant language suggesting it forgot context
- Identifies forgotten preferences: User restating preferences
- Spots contradictions: Conflicting facts over time
- Calculates drift score (0-100) and token waste percentage
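
For example, here is a short hypothetical transcript that exhibits two of these signals, a session reset and a forgotten preference:

```json
{
  "schema_version": "1.0",
  "messages": [
    { "idx": 0, "role": "user", "content": "Please answer in Bangla from now on." },
    { "idx": 1, "role": "assistant", "content": "Sure, I'll answer in Bangla." },
    { "idx": 2, "role": "assistant", "content": "Let's start over. How can I help you today?" },
    { "idx": 3, "role": "user", "content": "I already said I want Bangla." }
  ]
}
```

On input like this, you would expect the report to flag a session-reset event around message 2 and a forgotten-preference event around message 3.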
---
## Why this exists

When building conversational apps, memory failures often look like:
- Users repeating preferences: "I already said I want Bangla…"
- The assistant resets context: "Let's start over…"
- The same question is asked multiple times because the assistant doesn't converge
- Contradictory facts creep in
Memograph CLI gives you a quick, local diagnostic before you rebuild prompts, memory layers, or retrieval logic.
---
## Try it now

Get started in one command:

```bash
npx memograph-cli
```
This launches interactive mode with:
- Visual menu (arrow keys + Enter)
- Setup wizard for AI configuration
- Settings that persist across sessions
- Real-time progress indicators
---
## Interactive Mode

Run without arguments for a guided experience with arrow key navigation:

```bash
npx memograph-cli
```
Main features:
- Visual menu with ↑/↓ arrow key navigation
- Inspect transcripts: Enter file path → Choose output format → View results
- Manage settings: Configure once, settings persist in ~/.memograph/config.json
- Setup wizard: 5-step guided configuration for AI providers
Quick setup wizard:
1. Select provider category (Cloud/Aggregators/Local)
2. Choose specific provider (OpenAI, Anthropic, Ollama, etc.)
3. Configure base URL (if needed)
4. Enter API key (if required)
5. Select model
Keyboard shortcuts:
- ↑ / ↓ - Navigate options
- Enter - Select/confirm
- Ctrl+C - Exit
---
## Install

Recommended for first-time users and quick analysis:

```bash
npx memograph-cli
```

This launches interactive mode immediately. Configure your AI model on first run, and you're ready to analyze transcripts!

Best for regular use:

```bash
npm i -g memograph-cli
```

After installation, run from anywhere:

```bash
memograph
```

Note: the package name is memograph-cli, but the installed command is memograph.

For contributors and local testing, clone the repository and set it up as described in CONTRIBUTING.md.

---
## Quickstart

Get started in 3 steps:

```bash
# 1. Install globally (or use npx memograph-cli)
npm i -g memograph-cli

# 2. Configure your AI provider in interactive mode
npx memograph-cli

# 3. Inspect a transcript
memograph-cli inspect -i transcript.json
```

Quick example:

1. Create a transcript file, `transcript.json`:

```json
{
  "schema_version": "1.0",
  "messages": [
    { "idx": 0, "role": "user", "content": "Hello" },
    { "idx": 1, "role": "assistant", "content": "Hi!" }
  ]
}
```

2. Run `inspect` with flags:

```bash
memograph-cli inspect -i transcript.json --json
```

Note: If you've configured settings in interactive mode, CLI commands automatically use those settings. You can override any setting with CLI flags.
---
## CLI Mode

For scripting and automation, use the `inspect` command directly:

```bash
memograph-cli inspect -i transcript.json
```

When to use CLI mode:
- Automation scripts and CI/CD pipelines
- Batch processing multiple files
- When you already know your settings
Pro tip: Configure settings once in interactive mode, then use CLI mode for automated workflows!
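
For batch processing, one pattern is to build the argument list per file and spawn the CLI in a loop. A sketch in Node; `inspectArgs` is a hypothetical helper, and the flags it emits are the documented `inspect` options:

```javascript
// Build the argument list for one `memograph-cli inspect` run
function inspectArgs(file, { json = true, model } = {}) {
  const args = ["inspect", "-i", file];
  if (json) args.push("--json");
  if (model) args.push("--llm-model", model);
  return args;
}

// Usage idea (assumes memograph-cli is on your PATH):
// const { execFileSync } = require("child_process");
// for (const f of ["a.json", "b.json"]) {
//   execFileSync("memograph-cli", inspectArgs(f), { stdio: "inherit" });
// }
```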
---
CLI `inspect` command:

```bash
memograph-cli inspect -i <file> [--json] [--llm-model <model>]
```

Common options:

- `-i, --input <file>` - Transcript file (required)
- `--json` - Output JSON instead of text
- `--llm-model <model>` - Override model (e.g., gpt-4o)
- `--llm-provider <provider>` - Override provider (openai, anthropic)
- `--max-messages <n>` - Limit messages processed

Examples:

```bash
# Basic usage (uses saved settings)
memograph-cli inspect -i transcript.json

# JSON output for scripts
memograph-cli inspect -i transcript.json --json

# Use a different model
memograph-cli inspect -i transcript.json --llm-model gpt-4o
```

For all options, run:

```bash
memograph-cli inspect --help
```

---
## Configuration

Easiest: interactive setup

```bash
npx memograph-cli
# Select "Manage settings" → follow the wizard
# Settings are saved to ~/.memograph/config.json
```

Alternative: environment variables

```bash
# Create a .env file
OPENAI_API_KEY=sk-your-key-here
LLM_MODEL=gpt-4o-mini
```

Using local models (Ollama):

```bash
# Install and start Ollama
brew install ollama
ollama pull llama3.2
ollama serve

# Then configure in interactive mode or use CLI flags
```

Settings priority: CLI flags > Environment variables > Config file
---
## Input Format

Provide a JSON file with conversation messages:

```json
{
  "schema_version": "1.0",
  "messages": [
    { "idx": 0, "role": "user", "content": "Hello" },
    { "idx": 1, "role": "assistant", "content": "Hi!" }
  ]
}
```

Required fields:

- `role`: "user", "assistant", "system", or "tool"
- `content`: Message text

Optional fields:

- `idx`: Message index (auto-assigned if missing)
- `ts`: ISO timestamp
- `tokens`: Token count (estimated if missing)

---
## Output

Text output (default): human-readable report with drift score, events, and extracted facts.

JSON output (`--json` flag): machine-readable format for scripts and CI/CD:

```json
{
  "drift_score": 25,
  "token_waste_pct": 7.1,
  "events": [...],
  "should_have_been_memory": [...]
}
```
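
In CI, that JSON shape makes it easy to gate a build on drift. A minimal sketch in Node; the 50-point threshold and the sample report values are illustrative, not defaults of the tool:

```javascript
// gateDrift: pass/fail decision from a memograph JSON report
function gateDrift(report, threshold = 50) {
  return report.drift_score <= threshold;
}

// Sample report shaped like the `--json` output above
const report = { drift_score: 25, token_waste_pct: 7.1, events: [], should_have_been_memory: [] };
console.log(gateDrift(report) ? "pass" : "fail"); // prints "pass" (25 <= 50)
```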
---
## Privacy & Security

Your data stays local:

- Memograph reads transcript files from your local filesystem
- It only sends data to LLM APIs for analysis (or uses local models)
- No data is stored or transmitted elsewhere

API key safety:

- Keys are stored in `~/.memograph/config.json` or environment variables
- Never commit API keys to git (add `.env` to `.gitignore`)
- Use local models (Ollama) to avoid sending data to external APIs

---
## For Developers & Contributors

Interested in contributing or understanding how Memograph works? Check out CONTRIBUTING.md for:
- How it works: Detection algorithms, scoring, performance optimizations
- Development setup: Local environment, project structure, testing
- Roadmap: Planned features and improvements
- Publishing: Guidelines for releasing new versions
---
## Troubleshooting

"API key not found"

- Run `npx memograph-cli` and use "Manage settings" → "Set/Update API Key"
- Or set an environment variable: `export OPENAI_API_KEY=sk-...`

Interactive mode doesn't start

- Don't pass any arguments (arguments trigger CLI mode)
- Ensure your terminal supports ANSI colors and arrow keys

Settings not saving

- Settings live in `~/.memograph/config.json`
- Reset with: `rm ~/.memograph/config.json && npx memograph-cli`

Ollama not working

- Ensure Ollama is running: `ollama serve`
- Use the correct base URL: `http://localhost:11434/v1`
- Install the model: `ollama pull llama3.2`

Network/API errors

- Check your internet connection
- Verify API status (status.openai.com / status.anthropic.com)
- Try a different model or switch to local models

Where are settings stored?

- Location: `~/.memograph/config.json`
- View: `cat ~/.memograph/config.json`
---
## License

MIT License - see LICENSE file for details.