# n8n-nodes-openrouter-selector

n8n community node for intelligent OpenRouter model selection based on task, budget, and benchmarks.

```bash
npm install n8n-nodes-openrouter-selector
```

## Features
- Task-Based Selection: Optimized model recommendations for:
  - Translation (Chinese ↔ English optimized)
  - Coding & Development
  - Data Analysis & Reasoning
  - Vision & Image Analysis
  - Conversational Chat
  - Text Embedding
  - Summarization
  - Mathematical Reasoning
- Budget Awareness: Three budget tiers:
  - Cheap: Lowest cost, quality secondary
  - Balanced: Good price-performance ratio
  - Premium: Best quality, cost is no concern
- Benchmark-Based Scoring: Uses external benchmark data:
  - Artificial Analysis (Intelligence, Coding, Math indices)
  - LMSYS Chatbot Arena (Elo ratings)
  - LiveBench (Coding, Math, Reasoning scores)
- Dynamic Model Override: Select a specific model with a real-time scoring preview
- Flexible Filtering:
  - Minimum context length
  - JSON mode requirement
  - Vision/multimodal requirement
  - Cost limits
  - Provider whitelist/blacklist
## Installation

### Community Nodes (Recommended)

1. Go to Settings → Community Nodes
2. Click Install a community node
3. Enter: `n8n-nodes-openrouter-selector`
4. Click Install
### Manual Installation

```bash
# In your n8n custom nodes directory
cd ~/.n8n/custom
npm install n8n-nodes-openrouter-selector
```

### Development Build

```bash
git clone https://github.com/ecolights/n8n-nodes-openrouter-selector.git
cd n8n-nodes-openrouter-selector
pnpm install
pnpm build
```
## Prerequisites

This node requires:

1. Supabase Database with the benchmark schema (see docs/BENCHMARK_SYSTEM.md):
   - `models_catalog` - OpenRouter model data (synced via a separate workflow)
   - `model_name_mappings` - Master mapping table (manually maintained)
   - `model_benchmarks` - Benchmark scores (auto-synced weekly)
   - `task_profiles` - Task-specific scoring weights
   - `unmatched_models` - Review queue for new models
2. n8n Workflow: `TN_benchmark_sync_artificial_analysis` for the weekly benchmark sync
3. Credentials: Supabase URL and API key (service role for write access)
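To sanity-check the Supabase setup, a short read against the benchmark table can help. This is a sketch using `@supabase/supabase-js`, not part of the node; the environment variable names are placeholders for your own configuration, and the table/column names follow the schema below:

```ts
import { createClient } from "@supabase/supabase-js";

// Placeholder env var names; substitute your own configuration.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Fetch the top 10 models by general composite score.
async function topModels() {
  const { data, error } = await supabase
    .from("model_benchmarks")
    .select("openrouter_id, composite_general, lmsys_elo")
    .order("composite_general", { ascending: false })
    .limit(10);
  if (error) throw error;
  return data;
}
```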
## Database Schema

The full schema with triggers, functions, and RLS policies is documented in docs/BENCHMARK_SYSTEM.md.

Core Tables Overview:
```sql
-- Model name mappings (Source of Truth - manually maintained)
CREATE TABLE model_name_mappings (
  openrouter_id TEXT UNIQUE NOT NULL,  -- e.g. "anthropic/claude-sonnet-4"
  canonical_name TEXT NOT NULL,        -- Display name
  aa_name TEXT,                        -- Artificial Analysis name
  aa_slug TEXT,                        -- AA URL slug
  provider TEXT,                       -- anthropic, openai, google, etc.
  verified BOOLEAN DEFAULT false
);

-- Benchmark scores (auto-filled by sync workflow)
CREATE TABLE model_benchmarks (
  openrouter_id TEXT REFERENCES model_name_mappings(openrouter_id),
  -- Artificial Analysis
  aa_intelligence_index DECIMAL(5,2),
  aa_coding_index DECIMAL(5,2),
  aa_math_index DECIMAL(5,2),
  -- LMSYS Arena
  lmsys_elo INTEGER,
  -- LiveBench
  livebench_overall DECIMAL(5,2),
  livebench_coding DECIMAL(5,2),
  -- Computed composites (via trigger)
  composite_general DECIMAL(5,2),
  composite_code DECIMAL(5,2),
  composite_math DECIMAL(5,2)
);

-- Task-specific scoring weights
CREATE TABLE task_profiles (
  task_name TEXT UNIQUE NOT NULL,  -- general, code, translation, etc.
  weight_aa_intelligence DECIMAL(3,2),
  weight_aa_coding DECIMAL(3,2),
  weight_lmsys_elo DECIMAL(3,2),
  weight_livebench DECIMAL(3,2),
  boost_anthropic DECIMAL(3,2),
  boost_openai DECIMAL(3,2),
  boost_deepseek DECIMAL(3,2)
);
```
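The composite columns are filled by a Postgres trigger (documented in docs/BENCHMARK_SYSTEM.md). Purely as an illustration of the idea, a task-weighted composite could look like the sketch below; the Elo normalization is an assumption, not the trigger's actual formula:

```ts
interface TaskProfile {
  weight_aa_intelligence: number;
  weight_aa_coding: number;
  weight_lmsys_elo: number;
  weight_livebench: number;
}

interface BenchmarkRow {
  aa_intelligence_index: number | null;
  aa_coding_index: number | null;
  lmsys_elo: number | null;
  livebench_overall: number | null;
}

// Assumed normalization: map Arena Elo (~1000-1400) onto the 0-100 scale
// the other indices use. Illustrative only.
const normalizeElo = (elo: number): number => ((elo - 1000) / 400) * 100;

function taskComposite(b: BenchmarkRow, p: TaskProfile): number {
  const parts: Array<[number | null, number]> = [
    [b.aa_intelligence_index, p.weight_aa_intelligence],
    [b.aa_coding_index, p.weight_aa_coding],
    [b.lmsys_elo === null ? null : normalizeElo(b.lmsys_elo), p.weight_lmsys_elo],
    [b.livebench_overall, p.weight_livebench],
  ];
  // Skip missing benchmarks and renormalize the remaining weights.
  const present = parts.filter(([v]) => v !== null) as Array<[number, number]>;
  const totalWeight = present.reduce((sum, [, w]) => sum + w, 0);
  return totalWeight > 0
    ? present.reduce((sum, [v, w]) => sum + v * w, 0) / totalWeight
    : 0;
}
```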
## Usage
### Quick Start
1. Add the OpenRouter Model Selector node to your workflow
2. Configure credentials (Supabase URL + API Key)
3. Select a Task Category (e.g., "coding")
4. Select a Budget (e.g., "balanced")
5. Execute to get the recommended model
### Output

Full Output (default):

```json
{
  "recommended": {
    "modelId": "anthropic/claude-sonnet-4",
    "provider": "anthropic",
    "displayName": "Claude Sonnet 4",
    "contextLength": 200000,
    "supportsJson": true,
    "modality": "text+image->text",
    "pricing": {
      "promptPer1kUsd": 0.003,
      "completionPer1kUsd": 0.015,
      "combinedPer1kUsd": 0.009
    },
    "score": 87.5,
    "scoreBreakdown": {
      "benchmarkFit": 38,
      "taskFit": 28,
      "budgetFit": 18,
      "capabilityFit": 9,
      "providerBonus": 1.5
    },
    "reasoning": "Excellent benchmark performance for Coding & Development, ideal balanced pricing, anthropic provider bonus (+15%)."
  },
  "alternatives": [...],
  "queryMetadata": {
    "task": "coding",
    "budget": "balanced",
    "totalModelsEvaluated": 313,
    "modelsPassingFilters": 187,
    "executionTimeMs": 245
  }
}
```

### Connecting to OpenRouter
Connect the output to an HTTP Request node or OpenRouter integration:
```
[OpenRouter Model Selector] → [HTTP Request to OpenRouter API]
URL: https://openrouter.ai/api/v1/chat/completions
Body: { "model": "={{$json.recommended.modelId}}", ... }
```
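Outside n8n, the same output can drive a direct API call. A minimal TypeScript sketch, assuming Node 18+ (global `fetch`) and an `OPENROUTER_API_KEY` environment variable; the function name and the `SelectorOutput` type are illustrative, not part of the node:

```ts
interface SelectorOutput {
  recommended: { modelId: string };
}

// Feed the selector's recommendation into an OpenRouter chat completion.
async function chatWithRecommendedModel(selector: SelectorOutput, prompt: string) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: selector.recommended.modelId, // e.g. "anthropic/claude-sonnet-4"
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`OpenRouter request failed: ${res.status}`);
  return res.json();
}
```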
## Scoring Algorithm

The scoring formula is deterministic and based on external benchmarks:
```
score = ((benchmark_fit × 0.4) + (task_fit × 0.3) + (budget_fit × 0.2) + (capability_fit × 0.1)) × provider_boost
```

### Scoring Components
| Component | Weight | Description |
|-----------|--------|-------------|
| Benchmark Fit | 40% | Score from external benchmarks (AA, LMSYS, LiveBench) |
| Task Fit | 30% | How well the model matches task requirements |
| Budget Fit | 20% | Cost alignment with budget preference |
| Capability Fit | 10% | Context length, JSON support, verification status |
| Provider Boost | ×1.0-1.2 | Task-specific provider bonuses |
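As a sketch of how these weights combine (the fit inputs are assumed to be normalized to 0-100; the node's actual implementation may differ in detail):

```ts
interface FitScores {
  benchmarkFit: number;  // external benchmark composites (AA, LMSYS, LiveBench)
  taskFit: number;       // match against the selected task profile
  budgetFit: number;     // cost alignment with the chosen budget tier
  capabilityFit: number; // context length, JSON support, verification status
}

function scoreModel(fit: FitScores, providerBoost = 1.0): number {
  const weighted =
    fit.benchmarkFit * 0.4 +
    fit.taskFit * 0.3 +
    fit.budgetFit * 0.2 +
    fit.capabilityFit * 0.1;
  return weighted * providerBoost; // the boost multiplies the whole weighted sum
}
```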
### Task-Specific Provider Boosts
| Task | Provider Boosts |
|------|-----------------|
| Translation | DeepSeek +20%, Qwen +15%, Anthropic +10% |
| Coding | Anthropic +15%, OpenAI +10%, DeepSeek +8% |
| Analysis | Anthropic +15%, OpenAI +10%, Google +5% |
| Vision | OpenAI +15%, Google +12%, Anthropic +8% |
| Math | DeepSeek +15%, Qwen +12%, OpenAI +10% |
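The boost table translates directly into a lookup. The multipliers below encode the table above (e.g. +20% → 1.2); pairs not listed default to 1.0. A sketch, not the node's actual code:

```ts
// Task-specific provider multipliers from the table above.
const providerBoosts: Record<string, Record<string, number>> = {
  translation: { deepseek: 1.2, qwen: 1.15, anthropic: 1.1 },
  coding: { anthropic: 1.15, openai: 1.1, deepseek: 1.08 },
  analysis: { anthropic: 1.15, openai: 1.1, google: 1.05 },
  vision: { openai: 1.15, google: 1.12, anthropic: 1.08 },
  math: { deepseek: 1.15, qwen: 1.12, openai: 1.1 },
};

const getBoost = (task: string, provider: string): number =>
  providerBoosts[task]?.[provider] ?? 1.0;
```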
## Configuration
### Node Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| Task Category | Dropdown | Type of task (coding, translation, etc.) |
| Budget | Dropdown | Cost preference (cheap, balanced, premium) |
| Model Override | Dynamic Dropdown | Override with specific model |
| Filters | Collection | Advanced filtering options |
| Options | Collection | Output configuration |
### Filters
| Filter | Type | Default | Description |
|--------|------|---------|-------------|
| Min Context Length | Number | 8000 | Minimum context length in tokens |
| Require JSON Mode | Boolean | false | Only JSON-capable models |
| Require Vision | Boolean | false | Only multimodal models |
| Max Cost per 1K | Number | 0 | Cost limit in USD per 1K tokens (0 = no limit) |
| Provider Whitelist | Multi-select | [] | Include only these providers |
| Provider Blacklist | Multi-select | [] | Exclude these providers |
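A sketch of how these filters narrow the candidate list; field names mirror the node's output format and the defaults match the table above, but this is illustrative rather than the node's actual code:

```ts
interface CandidateModel {
  provider: string;
  contextLength: number;
  supportsJson: boolean;
  modality: string; // e.g. "text+image->text"
  pricing: { combinedPer1kUsd: number };
}

interface Filters {
  minContextLength: number;    // default 8000
  requireJsonMode: boolean;
  requireVision: boolean;
  maxCostPer1k: number;        // 0 = no limit
  providerWhitelist: string[];
  providerBlacklist: string[];
}

function passesFilters(m: CandidateModel, f: Filters): boolean {
  if (m.contextLength < f.minContextLength) return false;
  if (f.requireJsonMode && !m.supportsJson) return false;
  if (f.requireVision && !m.modality.includes("image")) return false;
  if (f.maxCostPer1k > 0 && m.pricing.combinedPer1kUsd > f.maxCostPer1k) return false;
  if (f.providerWhitelist.length > 0 && !f.providerWhitelist.includes(m.provider)) return false;
  if (f.providerBlacklist.includes(m.provider)) return false;
  return true;
}
```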
## Benchmark Sync Workflow

The node requires benchmark data to be synced weekly via an n8n workflow.

### Workflow Overview

Trigger: Weekly (Sunday 03:00 UTC) + Manual + Webhook

Data Flow:

```
[1. Fetch Artificial Analysis] ──────┐
│
[2. Fetch OpenRouter Catalog] ────────┼──► [4. Merge All Data]
│ │
[3. Fetch Existing Mappings] ────────┘ ▼
[5. Process & Match Models]
│
┌──────────────┴──────────────┐
▼ ▼
[6. Upsert Benchmarks] [7. Store Unmatched]
│ │
└──────────────┬──────────────┘
▼
[8. Merge Results]
│
▼
[9. Telegram Notification]
```

Detailed documentation: see docs/BENCHMARK_SYSTEM.md.
### Manual Trigger

```bash
# Via n8n webhook
curl -X POST https://n8n.dev.ecolights.de/webhook/benchmark-sync
```

## Development
```bash
# Install dependencies
pnpm install

# Build
pnpm build

# Watch mode
pnpm dev

# Lint
pnpm lint

# Format
pnpm format
```

## License

MIT

EcoLights (dev@ecolights.de)