# LLM Orchestra

Unified Observability & Orchestration SDK for Multi-Model AI Applications

```bash
npm install llm-orchestra
```
---
Building production LLM applications is painful:
- **Multi-model chaos** - Switching between Claude, GPT-4, and Gemini requires different SDKs, error handling, and retry logic
- **Blind spots** - No unified view of costs, latency, or token usage across providers
- **Debugging nightmares** - Tracing a request through chains, agents, and tool calls is nearly impossible
- **Cost explosions** - No visibility into which prompts and models are eating your budget
LLM Orchestra provides a unified layer for orchestrating and observing multi-model AI applications.
```typescript
import { Orchestra } from 'llm-orchestra';

const orchestra = new Orchestra({
  providers: ['anthropic', 'openai', 'google'],
  observability: {
    tracing: true,
    metrics: true,
    costTracking: true
  }
});

// Unified interface - same code, any model
const response = await orchestra.complete({
  model: 'claude-3-opus', // or 'gpt-4', 'gemini-pro'
  messages: [{ role: 'user', content: 'Hello!' }],
  fallback: ['gpt-4-turbo', 'gemini-pro'], // Automatic failover
  tags: ['production', 'chat-feature'] // For cost allocation
});

// Full observability out of the box
console.log(response.meta);
// {
//   latency: 1234,
//   tokens: { input: 10, output: 50 },
//   cost: 0.0023,
//   traceId: 'abc-123',
//   model: 'claude-3-opus',
//   provider: 'anthropic'
// }
```
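The `fallback` option above drives automatic failover. Conceptually, a fallback chain behaves like a try-in-order loop; here is a minimal standalone sketch (illustrative only, not the SDK's actual implementation; `CompleteFn` is a hypothetical stand-in for a provider call):

```typescript
// Illustrative failover sketch: try each model in order, return the
// first success, and rethrow the last error if every model fails.
type CompleteFn = (model: string) => Promise<string>;

async function completeWithFallback(
  models: string[],
  complete: CompleteFn
): Promise<{ content: string; model: string }> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return { content: await complete(model), model };
    } catch (err) {
      lastError = err; // remember the failure and fall through to the next model
    }
  }
  throw lastError;
}
```

In the hero example this would correspond to trying `claude-3-opus`, then `gpt-4-turbo`, then `gemini-pro`.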
## Installation

### TypeScript/Node.js

```bash
npm install llm-orchestra
```

## Quick Start
```typescript
import { Orchestra } from 'llm-orchestra';

// Initialize with your API keys
const orchestra = new Orchestra({
  providers: {
    anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
    openai: { apiKey: process.env.OPENAI_API_KEY },
  }
});

// Make requests with full observability
const result = await orchestra.complete({
  model: 'claude-3-sonnet',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing in simple terms.' }
  ]
});
```
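Since each response carries `meta.cost` (as shown in the first example), per-tag spend can be aggregated downstream. A minimal sketch, assuming usage records that pair that cost with the request's tags (the `UsageRecord` shape is an assumption, not an exported SDK type):

```typescript
// Aggregate spend per tag from recorded request metadata.
interface UsageRecord {
  cost: number;   // USD, as in response.meta.cost
  tags: string[]; // tags passed with the request, e.g. ['production']
}

function costByTag(records: UsageRecord[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const { cost, tags } of records) {
    for (const tag of tags) {
      // A request's cost counts toward every tag it carries.
      totals.set(tag, (totals.get(tag) ?? 0) + cost);
    }
  }
  return totals;
}
```

Note that a request tagged both `production` and `chat-feature` contributes its full cost to both buckets, so per-tag totals are overlapping views, not a partition of total spend.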
## Tracing

```typescript
import { Orchestra, trace } from 'llm-orchestra';

// Automatic tracing for complex flows
const result = await trace('user-question-flow', async (span) => {
  // Step 1: Classify intent
  const intent = await orchestra.complete({
    model: 'claude-3-haiku',
    messages: [{ role: 'user', content: userQuestion }],
    tags: ['intent-classification']
  });
  span.addEvent('intent-classified', { intent: intent.content });

  // Step 2: Route to appropriate model
  const response = await orchestra.complete({
    model: intent.content === 'complex' ? 'claude-3-opus' : 'claude-3-sonnet',
    messages: [...],
    tags: ['response-generation']
  });
  return response;
});
```

## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│ Your Application │
└─────────────────────────────┬───────────────────────────────────┘
│
┌─────────────────────────────▼───────────────────────────────────┐
│ LLM Orchestra SDK │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Routing │ │ Caching │ │ Tracing │ │
│ │ Engine │ │ Layer │ │ Context │ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │
│ │ │ │ │
│ ┌──────▼────────────────▼────────────────▼──────┐ │
│ │ Provider Adapters │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │Anthropic│ │ OpenAI │ │ Google │ • • • │ │
│ │ └─────────┘ └─────────┘ └─────────┘ │ │
│ └───────────────────────────────────────────────┘ │
└─────────────────────────────┬───────────────────────────────────┘
│
┌─────────────────────────────▼───────────────────────────────────┐
│ Observability Backend │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Traces │ │ Metrics │ │ Costs │ │
│ │ Store │ │ Store │ │ Tracker │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────┘
```
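The Provider Adapters layer in the diagram implies a common interface that each provider implements behind the unified `complete` call. A hypothetical sketch of that shape (type and method names are assumptions for illustration, not the SDK's actual API):

```typescript
// A common shape every provider adapter could implement.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface ProviderAdapter {
  readonly name: string;
  complete(
    model: string,
    messages: ChatMessage[]
  ): Promise<{ content: string; inputTokens: number; outputTokens: number }>;
}

// A stub adapter showing the contract; a real adapter would call the
// provider's HTTP API and map its response into this shape.
const echoAdapter: ProviderAdapter = {
  name: 'echo',
  async complete(_model, messages) {
    const last = messages[messages.length - 1]?.content ?? '';
    return { content: last, inputTokens: last.length, outputTokens: last.length };
  },
};
```

With a registry of such adapters, the routing engine can pick one by provider name without the caller changing any code.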
## Roadmap

### Phase 1
- [x] Unified provider interface (Claude, GPT-4, Gemini)
- [x] Basic tracing and cost tracking
- [x] TypeScript SDK
- [x] Local dashboard

### Phase 2
- [x] Python SDK
- [x] Semantic caching
- [x] Automatic failover and retries
- [x] OpenTelemetry export

### Phase 3
- [x] Multi-agent coordination primitives
- [x] Tool call tracing
- [x] Workflow engine
- [x] Memory backends

### Phase 4
- [x] Cloud dashboard (v0.3.0)
- [x] Team management (v0.3.0)
- [x] RBAC and audit logs (v0.3.0)

### Phase 5
- [x] Security scanning (CodeQL, dependency scanning, secret detection)
- [x] Encryption at rest (self-hosted PostgreSQL)
- [x] Azure AD SSO/OIDC integration

## Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
### Development Setup
```bash
# Clone the repo
git clone https://github.com/MegaPhoenix92/llm-orchestra.git
cd llm-orchestra

# Install dependencies
npm install

# Run tests
npm test

# Start local dashboard
npm run dashboard
```

### Cloud Dashboard Setup
The cloud dashboard requires PostgreSQL. We provide a Docker setup for local development:
```bash
# Clone the repo
git clone https://github.com/MegaPhoenix92/llm-orchestra.git
cd llm-orchestra

# Start PostgreSQL in Docker
docker-compose up -d

# Copy environment template
cp packages/dashboard/.env.example packages/dashboard/.env

# Install dependencies
npm install

# Push database schema
npm run db:push -w llm-orchestra-dashboard

# Run all tests
npm test

# Start the cloud dashboard
npm run dev -w llm-orchestra-dashboard
```

#### Docker Services
| Service | Port | Description |
|---------|------|-------------|
| PostgreSQL | 5436 | Database for dashboard (mapped from container's 5432) |
#### Environment Variables
Copy `.env.example` to `.env` and configure:

| Variable | Description | Default |
|----------|-------------|---------|
| `DATABASE_URL` | PostgreSQL connection string | `postgresql://orchestra:orchestra_dev@localhost:5436/llm_orchestra` |
| `JWT_SECRET` | Secret for JWT tokens | (required) |
| `ENCRYPTION_KEY` | Optional encryption for secrets at rest | (optional) |

#### Useful Commands
```bash
# Start database
docker-compose up -d

# Stop database
docker-compose down

# Reset database (delete all data)
docker-compose down -v && docker-compose up -d

# View database logs
docker-compose logs -f postgres

# Connect to database
docker exec -it llm-orchestra-db psql -U orchestra -d llm_orchestra
```

Built by TROZLAN. We're building the future of AI-powered enterprise solutions, including multi-agent orchestration and MCP infrastructure.
LLM Orchestra was born from our experience building production AI systems that coordinate multiple models and agents.
MIT License - See LICENSE for details.
---
Star this repo if you're interested in better LLM observability!