Model Context Protocol (MCP) server with 146 tools for code analysis, AI agents, git operations, multi-agent coordination, and kanban task management
npm install kemdicode-mcp

146 tools • 8 LLM providers • cluster bus • cognition layer • multi-agent orchestration • kanban • project memory
---
kemdiCode MCP is a Model Context Protocol server that gives AI agents and IDE assistants access to 146 specialized tools for code analysis, generation, git operations, file management, AST-aware editing, project memory, cognition & self-improvement, multi-board kanban with subtasks, multi-agent coordination, cluster bus with distributed LLM magistrale, typed data flow bus, structured output, and LLM-driven task management.

> Deep Tool Improvements — 3 new tools: agent-init (5-step agent onboarding), task-subtask (parent-child task hierarchy with cascade delete), board-workflow (custom workflow columns). Cycle detection via DFS in task dependency graphs. TF-IDF + bigram similarity replaces Jaccard for intent drift detection. Pipeline conditional branching with evaluateCondition(). AsyncLocalStorage for automatic session ID propagation. agent-watch removed (covered by monitor + agent-history). Pagination (offset) for task-list, list-memories, agent-list. Silent mode for all project tools. Cluster bus fixes: SCAN replaces blocking KEYS, fan-in aggregator cleanup on replacement. 146 tools, 649 tests passing.
> Tool UX & 12-Bug Audit — Cognition tools no longer require sessionId (auto-detected). New git-tag tool. git-log gains format:"json". file-search adds mode enum. 12-bug security audit fixing prototype pollution, infinite recursion, HMAC integrity, and more.
> CI/CD Multicast & Fan-In Aggregation — 6 CI signal types, fan-out to multiple clusters, fan-in result aggregation (all/first/majority/custom modes). Meta-router CI routing rules. Improved agent orchestration limits.
Table of Contents
- Cognition Layer: How AI Remembers
- Usage Examples
- What's Next
- Highlights
- Compatibility
- Quick Start
- IDE Configuration
- Multi-Provider LLM
- Tool Reference
- Architecture _(collapsible: system overview, 3-layer bus, cluster bus, data flow)_
- Multi-Agent Orchestration
- Multi-Model Consensus
- Kanban Task Management
- Recursive Tool Invocation
- CLI Reference
- Development
- Authors
- License
---
1.25.0 — Cluster Bus & LLM Magistrale
- Full-duplex inter-cluster communication via Redis Pub/Sub with typed signal envelopes
- 12 signal types, 3 send modes (unicast/broadcast/routed)
- Signal Flow Controller with backpressure, rate limiting, priority filtering
- Health Monitor with heartbeat tracking and stale detection
- LLM Magistrale: dispatch prompts across cluster nodes (4 strategies: first-wins, best-of-n, consensus, fallback-chain)
- Self-regulating Pass Controller (3 strategies: min-passes, quality-target, fixed)
- enhance-prompt tool for iterative prompt refinement
- Data Flow Bus: 12 typed channels with Zod schemas, correlation tracking, priority routing, Redis bridge
- Hardening: bloom filter dedup, circuit breaker, HMAC auth (see the signing sketch after this list)
- 559 unit tests. Read the full whitepaper →
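The HMAC hardening item above amounts to signing each signal envelope before publish and verifying it in constant time on receipt. A minimal sketch under assumed names: the SignalEnvelope fields and the CLUSTER_BUS_SECRET variable are illustrative, not the server's actual API.
```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative envelope shape; field names are assumptions, not the real cluster bus schema.
interface SignalEnvelope {
  id: string;
  type: string;
  payload: unknown;
  sig?: string;
}

// Hypothetical shared secret; the actual server's configuration may differ.
const SECRET = process.env.CLUSTER_BUS_SECRET ?? "dev-only-secret";

// Sign the canonical JSON of the envelope body (everything except the signature itself).
function sign(envelope: SignalEnvelope): SignalEnvelope {
  const { sig: _ignored, ...body } = envelope;
  const sig = createHmac("sha256", SECRET).update(JSON.stringify(body)).digest("hex");
  return { ...body, sig };
}

// Verify with a constant-time comparison before an inbound signal is processed.
function verify(envelope: SignalEnvelope): boolean {
  if (!envelope.sig) return false;
  const { sig, ...body } = envelope;
  const expected = createHmac("sha256", SECRET).update(JSON.stringify(body)).digest("hex");
  const a = Buffer.from(sig, "hex");
  const b = Buffer.from(expected, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```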
1.24.0 — Structured Output, Perplexity, Task Clustering
- generateObject() with Zod schema validation, retry logic, and automatic JSON repair via jsonrepair (sketched after this list)
- 8th LLM provider: Perplexity for research-tier queries (3-layer routing)
- Tool Annotations: all tools carry MCP-level hints (readOnlyHint, destructiveHint, openWorldHint)
- task-cluster with 11 actions for LLM-driven task grouping
- task-complexity: LLM-scored 1–10 analysis with subtask recommendations
- Data Flow Bus: typed message bus with 12 channels
- Global Event Bus with Redis Pub/Sub bridge
- MCP Client Capabilities: client-sampling, client-elicit, client-roots
- agent-orchestrate for autonomous AI agent loops
- Ambient Learning & Agent Ranking (bronze → diamond tiers)
- session-recover: single-tool context restore after compaction
- executeWithGuard() deduplication: −622 lines across 262 files
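A rough sketch of the generate, repair, validate, retry loop that a generateObject()-style helper implies. Only zod and jsonrepair are real dependencies here; callModel is a stand-in for whatever provider call the server actually makes, not its real implementation.
```typescript
import { z, type ZodTypeAny } from "zod";
import { jsonrepair } from "jsonrepair";

async function generateObjectSketch<T extends ZodTypeAny>(
  callModel: (prompt: string) => Promise<string>, // stand-in for the provider call
  prompt: string,
  schema: T,
  maxRetries = 2
): Promise<z.infer<T>> {
  let lastError = "";
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const raw = await callModel(
      attempt === 0
        ? prompt
        : `${prompt}\n\nThe previous output was invalid (${lastError}). Return only valid JSON.`
    );
    try {
      // jsonrepair fixes trailing commas, single quotes, unquoted keys, etc. before parsing.
      const candidate = JSON.parse(jsonrepair(raw));
      const result = schema.safeParse(candidate);
      if (result.success) return result.data;
      lastError = result.error.message;
    } catch (err) {
      lastError = err instanceof Error ? err.message : String(err);
    }
  }
  throw new Error(`Structured output failed after ${maxRetries + 1} attempts: ${lastError}`);
}

// Example schema: extract a prioritized task list from free-form text.
const TaskList = z.object({
  tasks: z.array(z.object({ title: z.string(), priority: z.enum(["low", "medium", "high"]) })),
});
```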
1.23.0 — Cognition Layer & Cross-Tool Intelligence
- 8 interconnected cognition tools: decision-journal, confidence-tracker, mental-model, intent-tracker, error-pattern, self-critique, smart-handoff, context-budget
- In-process event bus with 9 reactive handlers (decision → confidence, error → fix lookup, drift → critique)
- CognitionCrossLinker for bidirectional Redis links between cognition records
- self-critique → check-application action; mental-model → impact-analysis, dependency-chain, invariant-check
- smart-handoff auto-enriched with full cognition snapshot
1.22.0 — Code Quality Modernization
- console → Logger migration across 14 files (~70 call sites)
- ESLint warnings fixed, version header corrected
1.21.0 — Thinking Chain
- thinking-chain tool with 7 actions, forward-only constraint, branching, Redis-backed with 7-day TTL
1.20.0 — 14 New Tools + Task Comments
- git-add, git-commit, git-stash, task-get, task-delete, task-comment, board-delete, workspace-delete, file-delete, file-move, file-copy, file-backup-restore, pipeline, checkpoint-diff
- Metadata for all tools, auto-sessionId, board/workspace name lookup
---
Cognition Layer: How AI Remembers
The cognition layer gives agents persistent self-awareness across sessions. As the agent works, it writes structured records to Redis — decisions, confidence levels, error patterns, intent hierarchies, and lessons learned.
During a session: The agent records intents, logs decisions with reasoning, tracks confidence, and matches errors against its cross-session database. At the end, self-critique extracts lessons and smart-handoff creates a structured briefing auto-enriched with a full cognition snapshot.
New session: The agent calls smart-handoff:latest (or session-recover) and gets back the intent hierarchy, approach rationale, status, warnings, lessons, and the single most important next action — no re-explanation needed.
Cross-tool intelligence: Tools react to each other through a global event bus. Recording a decision auto-creates a confidence record. Low confidence triggers drift detection. Errors scan recent decisions. Lessons cross-link to matching error patterns. All backed by CognitionCrossLinker with bidirectional Redis links.
Data lives in Redis with configurable TTL (default 7 days). Nothing is sent to external services.
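A simplified view of how that reactive wiring can work: one handler subscribes to decision events and seeds a confidence record, another reacts to low confidence. The event names, payload shapes, and EventEmitter substrate here are illustrative assumptions, not the server's actual bus API.
```typescript
import { EventEmitter } from "node:events";

// Illustrative payload shapes; the real records live in Redis with richer fields.
interface DecisionEvent { sessionId: string; decisionId: string; summary: string; confidence?: number; }
interface ConfidenceRecord { sessionId: string; source: string; score: number; }

const bus = new EventEmitter();

// Handler: every recorded decision auto-creates a confidence record and re-emits it.
bus.on("cognition:decision", (e: DecisionEvent) => {
  const record: ConfidenceRecord = { sessionId: e.sessionId, source: e.decisionId, score: e.confidence ?? 0.5 };
  bus.emit("cognition:confidence", record);
});

// Handler: low confidence triggers a drift check against the intent hierarchy.
bus.on("cognition:confidence", (r: ConfidenceRecord) => {
  if (r.score < 0.4) bus.emit("cognition:drift-check", { sessionId: r.sessionId, reason: r.source });
});

// Recording one decision fans out to confidence tracking and drift detection
// without the caller knowing those handlers exist.
bus.emit("cognition:decision", {
  sessionId: "sess-1",
  decisionId: "d-42",
  summary: "chose RS256 JWTs over HS256",
  confidence: 0.3,
});
```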
---
Usage Examples
You don't call these tools directly — your AI agent (Claude Code, Cursor, etc.) invokes them when you describe what you need. Here are real prompts and what happens behind the scenes:
Code review before committing:
```
You: "Review the auth module for security issues"
→ Agent calls: code-review --files "@src/auth/**/*.ts" --focus "security"
```
Fix a bug with AI assistance:
```
You: "There's a race condition in the queue processor, find and fix it"
→ Agent calls: fix-bug --description "race condition in queue processor" --files "@src/queue/"
```
Multi-model comparison for architecture decisions:
```
You: "Ask 3 models whether we should use event sourcing or CRUD for the order service"
→ Agent calls: consensus-prompt \
    --prompt "Event sourcing vs CRUD for an order management service with 10k orders/day" \
    --boardModels '["o:gpt-5","a:claude-sonnet-4-5","g:gemini-3-pro"]' \
    --ceoModel "a:claude-opus-4-5:4k"
```
Project memory for persistent context:
```
You: "Remember that we use JWT with RS256 for auth in this project"
→ Agent calls: write-memory --name "auth-strategy" --content "JWT with RS256, keys in /etc/keys/" --tags '["auth","architecture"]'

You: "What was our auth strategy?"
→ Agent calls: read-memory --name "auth-strategy"
```
Multi-agent task distribution:
```
You: "Set up 3 agents: backend, frontend, QA. Backend works on the API, frontend on React components"
→ Agent calls: agent-register → task-create → task-push-multi
→ Agents coordinate via shared-thoughts and queue-message
```
---
```bash
npm install -g kemdicode-mcp
```
Then add to your AI IDE:
```bash
# Claude Code
claude mcp add kemdicode-mcp -- kemdicode-mcp
```
kemdiCode MCP works best when you tell the agent to use it. Add a line to your project's CLAUDE.md, .cursorrules, or system prompt:
```
You have access to kemdiCode MCP server. Use its tools for:
- Project memory (write-memory, read-memory) to persist decisions across sessions
- Cognition tools (decision-journal, smart-handoff) to track your reasoning
- Kanban (task-create, task-list) for project management
- Code analysis (code-review, find-definition) for deep code understanding
```
Example: SaaS landing page
```
You: "Build a landing page for a SaaS product. Use kemdiCode tools to track progress
and remember design decisions."

What the agent does:
1. write-memory --name "landing-design" → saves design system choices
2. decision-journal → records "chose Tailwind over CSS modules" with reasoning
3. task-create → creates tasks: hero section, pricing, testimonials, footer
4. code-review → reviews each component for accessibility
5. smart-handoff → creates handoff so next session can continue seamlessly
```
Example: Flappy Bird clone for Android
```
You: "Build a Flappy Bird clone in Kotlin for Android. Track architecture decisions
and use the kanban board."

What the agent does:
1. intent-tracker → sets mission "Flappy Bird Android clone"
2. mental-model → maps architecture: GameView, Bird, Pipe, ScoreManager, GameLoop
3. board-create → creates "Flappy Bird Sprint 1"
4. task-create → physics engine, rendering, collision detection, scoring, sounds
5. decision-journal → records "chose Canvas over OpenGL" (simpler for 2D, faster iteration)
6. error-pattern → when bitmap loading fails, records fix for next time
7. self-critique → "physics feels floaty, adjust gravity constant next session"
8. smart-handoff → full briefing for the next session with all context
```
The agent doesn't just write code — it builds a persistent understanding of your project that survives across sessions, compactions, and context resets.
---
Highlights
| Capability | Description |
|:-----------|:------------|
| 146 MCP Tools | Code review, refactoring, testing, git, file management, AST editing, memory, checkpoints, kanban with subtasks, cognition, cluster bus, data flow, pipelines, structured output, task clustering |
| Cluster Bus | Distributed LLM orchestration: 18 signal types, 4 send modes (incl. multicast), magistrale with 4 aggregation strategies, multi-pass quality control, CI/CD fan-in |
| Data Flow Bus | 12 typed channels (ai:*, kanban:*, cognition:*, agent:*, system:*) with Zod schemas, correlation tracking, Redis bridge |
| Cognition Layer | 8 self-improvement tools: decision journal, confidence tracking, mental models, intent hierarchy with TF-IDF drift detection, error patterns, self-critique, smart handoff, context budget |
| Cross-Tool Intelligence | Global event bus + cross-linker: tools react to each other across cognition, kanban, session, and recursive modules |
| 8 LLM Providers | Native SDKs for OpenAI, Anthropic, Gemini + OpenAI-compatible for Groq, DeepSeek, Ollama, OpenRouter, Perplexity |
| Multi-Agent | Agent onboarding (agent-init), ranking (bronze→diamond), coordination via kanban boards and Redis Pub/Sub |
| Structured Output | generateObject() with Zod schemas, JSON repair, and retry logic for reliable LLM-to-data extraction |
| Parallel Multi-Model | Send one prompt to N models simultaneously; CEO-and-Board consensus pattern |
| Thinking Tokens | Unified syntax across providers: o:gpt-5:high • a:claude-sonnet-4-5:4k • g:gemini-3-pro:8k |
| Tree-sitter AST | Language-aware navigation and symbol editing for 19 languages |
| Project Memory | Persistent per-project key-value store with TTL and tags |
| Session Resurrection | loci-recall + smart-handoff restore full context after compaction |
| Hot Reload | Change provider, model, or config at runtime without restart |
| Cross-Runtime | Runs on Bun (recommended) or Node.js with automatic detection |
---
Compatibility
| IDE / Editor | Status | Config location |
|:-------------|:------:|:----------------|
| Claude Code | ✅ | claude mcp add or ~/.claude.json |
| Cursor | ✅ | Settings → Features → MCP |
| KiroCode | ✅ | ~/.kirocode/mcp.json |
| RooCode | ✅ | VS Code extension settings |
---
Quick Start
Prerequisites
- Bun ≥ 1.0 _(recommended)_ or Node.js ≥ 18
- Redis _(optional — required only for multi-agent features and cognition layer)_
Install and run
```bash
git clone https://github.com/kemdi-pl/kemdicode-mcp.git
cd kemdicode-mcp
bun install && bun run build:bun
bun run start:bun
```
Node.js alternative
```bash
npm install && npm run build && npm run start
```
---
IDE Configuration
Claude Code
```bash
claude mcp add kemdicode-mcp -- bun /path/to/kemdicode-mcp/dist/index.js
```
Or add to ~/.claude.json:
```json
{
  "mcpServers": {
    "kemdicode-mcp": {
      "command": "bun",
      "args": ["/path/to/kemdicode-mcp/dist/index.js"]
    }
  }
}
```
Cursor
Settings → Features → MCP:
```json
{
  "mcpServers": {
    "kemdicode-mcp": {
      "command": "bun",
      "args": ["/path/to/kemdicode-mcp/dist/index.js", "-m", "gpt-5"]
    }
  }
}
```
KiroCode
Add to ~/.kirocode/mcp.json:
```json
{
  "mcpServers": {
    "kemdicode-mcp": {
      "command": "bun",
      "args": [
        "/path/to/kemdicode-mcp/dist/index.js",
        "-m", "claude-sonnet-4-5",
        "--redis-host", "127.0.0.1"
      ]
    }
  }
}
```
RooCode
Add to VS Code settings (RooCode extension):
```json
{
  "mcpServers": {
    "kemdicode-mcp": {
      "command": "bun",
      "args": [
        "/path/to/kemdicode-mcp/dist/index.js",
        "-m", "claude-sonnet-4-5",
        "--redis-host", "127.0.0.1"
      ]
    }
  }
}
```
---
Multi-Provider LLM
kemdiCode MCP ships with 8 built-in providers. Each can be activated by setting the corresponding API key:
```bash
export OPENAI_API_KEY=sk-...        # OpenAI
export ANTHROPIC_API_KEY=sk-ant-... # Anthropic
export GEMINI_API_KEY=AI...         # Google Gemini
export GROQ_API_KEY=gsk_...         # Groq
export DEEPSEEK_API_KEY=sk-...      # DeepSeek
export OPENROUTER_API_KEY=sk-or-... # OpenRouter
export PERPLEXITY_API_KEY=pplx-...  # Perplexity (research tier)
# Ollama — no key required (local)
```
Model selection
Use provider:model (or the short alias) anywhere a model is accepted:
```
openai:gpt-5                 o:gpt-5              # Latest flagship model
anthropic:claude-sonnet-4-5  a:claude-sonnet-4-5  # Best balance
anthropic:claude-opus-4-5    a:claude-opus-4-5    # Maximum intelligence
gemini:gemini-3-pro          g:gemini-3-pro       # Most intelligent
groq:llama-3.3-70b           q:llama-3.3-70b      # Fast inference
deepseek:deepseek-chat       d:deepseek-chat      # Cost effective
ollama:llama3.3              l:llama3.3           # Local deployment
openrouter:gpt-5             r:gpt-5              # Aggregator access
perplexity:sonar-pro         p:sonar-pro          # Research queries
```
Thinking tokens
Append a third segment to enable extended thinking:
| Provider | Syntax | Effect |
|:---------|:-------|:-------|
| OpenAI (reasoning) | o:gpt-5:high | Sets reasoning_effort to low / medium / high |
| Anthropic | a:claude-sonnet-4-5:4k | Allocates 4096 extended thinking tokens |
| Gemini | g:gemini-3-pro:8k | Allocates 8192 thinking tokens |
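The provider:model[:thinking] string is just a parsing convention. A minimal sketch of how the third segment could map to provider-specific options; the alias table and option names below are illustrative assumptions, not the server's internals.
```typescript
// Maps the unified "provider:model[:thinking]" syntax to provider-specific options.
const ALIASES: Record<string, string> = {
  o: "openai", a: "anthropic", g: "gemini", q: "groq",
  d: "deepseek", l: "ollama", r: "openrouter", p: "perplexity",
};

interface ModelSpec {
  provider: string;
  model: string;
  reasoningEffort?: "low" | "medium" | "high"; // OpenAI-style reasoning models
  thinkingTokens?: number;                     // Anthropic / Gemini thinking budget
}

function parseModel(spec: string): ModelSpec {
  const [rawProvider, model, thinking] = spec.split(":");
  const provider = ALIASES[rawProvider] ?? rawProvider;
  if (!thinking) return { provider, model };
  if (provider === "openai") return { provider, model, reasoningEffort: thinking as ModelSpec["reasoningEffort"] };
  // "4k" → 4096 tokens, "8k" → 8192 tokens
  const tokens = thinking.endsWith("k") ? Number(thinking.slice(0, -1)) * 1024 : Number(thinking);
  return { provider, model, thinkingTokens: tokens };
}

// parseModel("a:claude-sonnet-4-5:4k")
// → { provider: "anthropic", model: "claude-sonnet-4-5", thinkingTokens: 4096 }
```
---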
Tool Reference
> 146 tools across 23 categories.
| Category | # | Tools |
|:---------|:-:|:------|
| Cluster Bus | 7 | cluster-bus-status cluster-bus-topology cluster-bus-send cluster-bus-magistrale cluster-bus-flow cluster-bus-routing cluster-bus-inspect |
| Cognition | 8 | decision-journal confidence-tracker mental-model intent-tracker error-pattern self-critique smart-handoff context-budget |
| AI Agents | 4 | plan build brainstorm ask-ai |
| Multi-LLM | 3 | multi-prompt consensus-prompt enhance-prompt |
| Code Analysis | 8 | code-review explain-code find-definition find-references find-symbols semantic-search code-outline analyze-deps |
| Line Editing | 4 | insert-at-line delete-lines replace-lines replace-content |
| Symbol Editing | 3 | insert-before-symbol insert-after-symbol rename-symbol |
| Code Modification | 5 | fix-bug refactor auto-fix auto-fix-agent write-tests |
| Project Memory | 8 | write-memory read-memory list-memories edit-memory delete-memory checkpoint-save checkpoint-restore checkpoint-diff |
| Git | 9 | git-status git-diff git-log git-blame git-branch git-add git-commit git-stash git-tag |
| File Operations | 9 | file-read file-write file-search file-tree file-diff file-delete file-move file-copy file-backup-restore |
| Project | 5 | project-info run-script run-tests run-lint check-types |
| Kanban — Tasks | 13 | task-create task-get task-list task-update task-delete task-comment task-claim task-assign task-push-multi task-subtask board-status task-cluster task-complexity |
| Kanban — Workspaces | 5 | workspace-create workspace-list workspace-join workspace-leave workspace-delete |
| Kanban — Boards | 7 | board-create board-list board-share board-members board-invite board-delete board-workflow |
| Recursive | 4 | invoke-tool invoke-batch invocation-log agent-orchestrate |
| Multi-Agent | 14 | agent-init agent-list agent-register agent-alert agent-inject agent-history monitor agent-summary agent-rank queue-message shared-thoughts get-shared-context feedback batch |
| Orchestration | 1 | pipeline |
| Session | 6 | session-list session-info session-create session-switch session-delete session-recover |
| MCP Client | 3 | client-sampling client-elicit client-roots |
| Knowledge Graph | 4 | graph-query graph-find-path loci-recall sequence-recommend |
| Thinking Chain | 1 | thinking-chain |
| MPC Security | 4 | mpc-split mpc-distribute mpc-reconstruct mpc-status |
| RL Learning | 2 | rl-reward-stats rl-dopamine-log |
| System | 8 | env-info memory-usage ai-config ai-models tool-health config ping help |
---
Architecture
System Overview
| Layer | Component | Description |
|:------|:----------|:------------|
| Clients | Claude Code, Cursor, KiroCode, RooCode | Connect via SSE + JSON-RPC (MCP Protocol) |
| HTTP Server | :3100 (Bun or Node.js) | Routes: /sse, /message, /resume, /stream |
| Session Manager | Per-client isolation | CWD injection, activity tracking, SSE keep-alive |
| Tool Registry | 146 tools, 23 categories | Zod schema validation, tool annotations, lazy loading |
| Cluster Bus | Distributed signal bus | Full-duplex inter-cluster signals via Redis Pub/Sub |
| Data Flow Bus | 12 typed channels | Zod schemas, correlation tracking, priority routing |
| Cognition Layer | Global event bus + cross-linker | 9 reactive handlers, bidirectional Redis links |
| Provider Registry | 8 LLM providers | Native SDKs + OpenAI-compatible. Hot-reload, structured output |
| Tree-sitter AST | 19 languages | WASM parsers, symbol navigation, rename, insert |
| Runtime | Bun / Node.js | Auto-detection, unified HTTP, process, crypto |
| Redis (DB 2) | Shared state | mcp:context:*, mcp:agents:*, mcp:kanban:*, mcp:memory:*, mcp:cognition:* |
→ Full diagram: docs/architecture-overview.md
3-Layer Bus Architecture
The server uses a 3-layer bus with 3 independent Redis paths and anti-amplification bridges:
```
+====================================================================+
|| L3: ClusterBus (Redis Pub/Sub, mcp:cluster:*) ||
|| ||
|| 18 signal types | 4 send modes (unicast/broadcast/routed/mcast) ||
|| SignalFlowCtrl | MetaRouter | HealthMonitor | FanInAggregator ||
|| ||
|| +---------------------+ +------------------------+ ||
|| | EventBridge L3<>L1 | | DataFlowBridge L3<>L2 | ||
|| | hop limit = 5 | | hop limit = 5 | ||
|| +---------------------+ +------------------------+ ||
+====================================================================+
|| L2: DataFlowBus (in-process + Redis mcp:dataflow:{channel}) ||
|| ||
|| ai:completion | kanban:task-change | cognition:decision ||
|| ai:structured | kanban:complexity | cognition:intent ||
|| ai:research | | cognition:error ||
|| agent:status | agent:message | system:health | system:cfg ||
|| ||
|| DataFlowEnvelope: correlation, priority 0-3, TTL, Zod schemas ||
+====================================================================+
|| L1: GlobalEventBus (in-process + Redis mcp:events:{type}) ||
|| ||
|| namespaced events | async queueMicrotask | max chain depth = 8 ||
|| CognitionEventBus wrapper (auto-prefix "cognition:") ||
+====================================================================+
|
Module Handlers: cognition (9) | kanban (2) | loop (2)
```
→ Full documentation: docs/architecture-3-layer-bus.md
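The anti-amplification bridges in the diagram cap how many times a message may be re-published between layers. A sketch of that guard under assumed field names (a hops counter on the envelope, with the limit of 5 shown above):
```typescript
// Assumed envelope fields; only the hop-limit idea is taken from the diagram above.
interface BridgedMessage { id: string; payload: unknown; hops: number; }

const MAX_HOPS = 5;

// Re-publish onto the other bus only if the message has not crossed too many bridges,
// so an L1 <-> L2 <-> L3 loop cannot amplify a single event indefinitely.
function bridge(message: BridgedMessage, publish: (m: BridgedMessage) => void): boolean {
  if (message.hops >= MAX_HOPS) return false; // drop instead of looping
  publish({ ...message, hops: message.hops + 1 });
  return true;
}
```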
Cluster Bus & Magistrale
Distributed LLM orchestration across cluster nodes with typed signals and multi-pass quality control.
```bash
# Register a cluster node
cluster-bus-topology --action "register" --clusterId "backend-ai" \
  --clusterName "Backend LLM" --capabilities '["typescript","code-review"]' \
  --metaTags '["role:worker","tier:pro"]'

# Magistrale: dispatch to multiple clusters, pick best result
cluster-bus-magistrale --prompt "Design a rate limiter" --strategy "best-of-n" \
  --maxTargets 3 --timeoutMs 60000 --minResponses 2 --qualityThreshold 0.85 \
  --passStrategy "quality-target" --maxPasses 5
```
Magistrale strategies: first-wins • best-of-n • consensus • fallback-chain
Pass strategies: min-passes • quality-target • fixed
→ Full guide: examples/08-cluster-bus-magistrale.md
Data Flow Bus
12 typed channels for structured inter-module communication with Zod schemas and correlation tracking.
Automatic flows:
- ask-ai → ai:completion → cognition subscribes → logs decision context
- task-update → kanban:task-change → agent subscribes → notifies assignee
- error detected → cognition:error → error-pattern DB → suggests fix
- tool-health → system:health → monitor → alerts on degradation

Every message follows DataFlowEnvelope: unique ID, correlation chain, priority (0–3), TTL, Zod-validated payload. Redis bridge for cross-session sync.
→ Full guide: examples/09-dataflow-bus.md
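A rough Zod model of an envelope with those properties. The field names are guesses based on the description above (ID, correlation chain, priority 0–3, TTL, typed payload), not the actual schema:
```typescript
import { z } from "zod";
import { randomUUID } from "node:crypto";

// Illustrative reconstruction of a DataFlowEnvelope-style message.
const Envelope = z.object({
  id: z.string().uuid(),
  channel: z.string(),                       // e.g. "kanban:task-change"
  correlationChain: z.array(z.string()),     // IDs of upstream messages
  priority: z.number().int().min(0).max(3),  // 0 = lowest, 3 = highest
  ttlMs: z.number().int().positive(),
  payload: z.unknown(),                      // validated per-channel by a dedicated schema
});
type Envelope = z.infer<typeof Envelope>;

const msg: Envelope = Envelope.parse({
  id: randomUUID(),
  channel: "kanban:task-change",
  correlationChain: [],
  priority: 2,
  ttlMs: 60_000,
  payload: { taskId: "t-1", status: "review" },
});
```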
---
Multi-Agent Orchestration
Register agents, distribute work across kanban boards, and coordinate via Redis Pub/Sub:
```bash
# Quick onboarding — register, count tasks, claim, summarize, set alerts
agent-init --sessionId "sess-1" --agentName "backend-dev" --role "worker" \
  --capabilities '["typescript","postgresql"]' --boardId "sprint-1"

# Or manual registration
agent-register --agents '[
  {"id":"backend","role":"backend","capabilities":["typescript","postgresql"]},
  {"id":"frontend","role":"frontend","capabilities":["react","tailwind"]},
  {"id":"qa","role":"quality","capabilities":["jest","cypress"]}
]'

# Distribute tasks
task-push-multi --taskIds '["api-1","api-2"]' --agents '["backend"]' --mode assign

# Broadcast a requirement
queue-message --broadcast true --message "Use OpenAPI 3.0 spec" --priority high

# Real-time monitoring
monitor --view hierarchy
```
---
Multi-Model Consensus
Send one prompt to N models in parallel, then let a CEO model synthesize:
```bash
# CEO-and-Board consensus
consensus-prompt \
  --prompt "Redis vs PostgreSQL for sessions?" \
  --boardModels '["o:gpt-5", "a:claude-sonnet-4-5", "g:gemini-3-pro"]' \
  --ceoModel "a:claude-opus-4-5:4k"
```
All board models run via Promise.allSettled() — individual failures never block the others.
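The fan-out pattern is easy to picture: board prompts run concurrently, settled results are collected, and failures are simply dropped before the CEO synthesis. A sketch with a placeholder askModel() call standing in for the real provider layer:
```typescript
// askModel is a placeholder for whatever per-model provider call the server makes.
async function consensusSketch(
  prompt: string,
  boardModels: string[],
  ceoModel: string,
  askModel: (model: string, prompt: string) => Promise<string>
): Promise<string> {
  // Fan out to every board model at once; one slow or failing model never blocks the rest.
  const settled = await Promise.allSettled(boardModels.map((m) => askModel(m, prompt)));

  const answers = settled
    .map((r, i) => (r.status === "fulfilled" ? `## ${boardModels[i]}\n${r.value}` : null))
    .filter((a): a is string => a !== null);

  if (answers.length === 0) throw new Error("All board models failed");

  // The CEO model synthesizes the surviving answers into a single recommendation.
  return askModel(
    ceoModel,
    `Synthesize a final recommendation from these answers:\n\n${answers.join("\n\n")}\n\nQuestion: ${prompt}`
  );
}
```
---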
Kanban Task Management
```bash
# Create a workspace
workspace-create --name "Project Alpha"

# Add boards with custom workflow
board-create --name "Backend Sprint 1" --workspaceId <workspace-id>
board-workflow --boardId <board-id> --action set --columns '["backlog","dev","review","qa","done"]'

# Batch-create tasks with subtasks
task-create --tasks '[
  {"title":"Auth API","priority":"high","boardId":""},
  {"title":"Rate limiter","priority":"medium","boardId":""}
]'
task-subtask --action create --parentTaskId "t-1" --title "JWT validation" --priority "high"

# Push to agents
task-push-multi --taskIds '["t-1","t-2"]' --agents '["agent-1"]' --mode assign
```
Features: workspaces • multiple boards • custom workflow columns • parent-child subtasks with cascade delete • dependency cycle detection • role-based access • batch ops (1–20 per call) • assign / clone / notify • append-only task comments • pagination with offset
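Dependency cycle detection (noted in the feature list above) is a standard depth-first search with a recursion stack. A generic sketch over an assumed taskId → dependsOn[] map rather than the server's internal data model:
```typescript
// Returns true if the dependency graph contains a cycle (e.g. t-1 → t-2 → t-1).
// `deps` maps a task ID to the IDs it depends on; the shape is assumed for illustration.
function hasDependencyCycle(deps: Map<string, string[]>): boolean {
  const visited = new Set<string>();
  const onStack = new Set<string>(); // nodes on the current DFS path

  const dfs = (task: string): boolean => {
    visited.add(task);
    onStack.add(task);
    for (const next of deps.get(task) ?? []) {
      if (onStack.has(next)) return true;               // back edge → cycle
      if (!visited.has(next) && dfs(next)) return true;
    }
    onStack.delete(task);
    return false;
  };

  for (const task of deps.keys()) {
    if (!visited.has(task) && dfs(task)) return true;
  }
  return false;
}

// hasDependencyCycle(new Map([["t-1", ["t-2"]], ["t-2", ["t-1"]]])) → true
```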
---
Recursive Tool Invocation
Sub-agents can invoke other tools with built-in safety limits (max depth 2, rate-limited):
```bash
invoke-batch --invocations '[
{"tool":"file-read","args":{"path":"@src/index.ts"}},
{"tool":"run-tests","args":{}}
]' --mode parallel
```
---
CLI Reference
```bash
bun dist/index.js [options]
```
| Flag | Default | Description |
|:-----|:-------:|:------------|
| -m, --model | — | Primary AI model |
| -f, --fallback-model | — | Fallback on quota / error |
| --port | 3100 | HTTP server port |
| --host | 127.0.0.1 | Bind address |
| --redis-host | 127.0.0.1 | Redis host |
| --redis-port | 6379 | Redis port |
| --no-context | — | Disable Redis context sharing |
| -v, --verbose | — | Full output with decorations |
| --compact | — | Essential fields only |
---
Development
Build & run
| Command | Description |
|:--------|:------------|
| bun install | Install all dependencies |
| bun run build:bun | Bundle for Bun runtime |
| bun run start:bun | Start server on :3100 |
| bun run dev:bun | Watch mode with hot-reload |
| npm run build | TypeScript compilation for Node.js |
| npm run start | Start with Node.js |
Quality checks
| Command | Description |
|:--------|:------------|
| bun run typecheck | Type-check without emitting |
| bun run lint | ESLint |
| bun run format | Prettier |
| bun run prepare | All checks (pre-commit) |
Environment variables
| Variable | Description |
|:---------|:------------|
| OPENAI_API_KEY | OpenAI API key |
| ANTHROPIC_API_KEY | Anthropic API key |
| GEMINI_API_KEY | Google Gemini API key |
| GROQ_API_KEY | Groq API key |
| DEEPSEEK_API_KEY | DeepSeek API key |
| OPENROUTER_API_KEY | OpenRouter API key |
| PERPLEXITY_API_KEY | Perplexity API key (research tier) |
| MPC_MASTER_SECRET | Master secret for MPC security tools |
---
Documentation
| Document | Description |
|----------|-------------|
| Technical Whitepaper (PDF) | Full architecture description covering protocol layers, cognition system, and LLM Magistrale with formal specifications |
| Architecture Overview | High-level system layers diagram |
| 3-Layer Bus Architecture | Detailed L3/L2/L1 bus design with bridges |
| Examples | 12 practical guides covering all major features |
---
Authors
Dawid Irzyk — dawid@kemdi.pl
Kemdi Sp. z o.o.
License
This project is licensed under the GNU General Public License v3.0 — see the LICENSE file for details.