# ensemble-mcp

An MCP server for orchestrating multiple AI agent backends (Claude Code, Codex, Ollama).

```bash
npm install ensemble-mcp
```

## Features
- Multi-backend support: Spawn agents using Claude Code, Codex, or Ollama
- Configurable backends: Enable/disable backends, customize commands
- Circuit breaker: Safety limits to prevent runaway processes
- Activity sensing: Monitor agent working/idle status
- Multi-turn conversations: Send follow-up prompts to running agents
- Pagination: Handle large outputs efficiently
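The circuit-breaker feature above boils down to a few counters checked before each spawn. A minimal sketch of that admission logic, with illustrative names and the defaults documented below (not ensemble-mcp's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class CircuitBreaker:
    """Illustrative admission check; defaults mirror the limits table below."""
    max_active_agents: int = 5
    max_total_agents: int = 20

    def can_spawn(self, active: int, total: int) -> tuple[bool, str]:
        # Reject if too many agents are running right now
        if active >= self.max_active_agents:
            return (False, "too many concurrent agents")
        # Reject if the lifetime total (including completed) is exhausted
        if total >= self.max_total_agents:
            return (False, "total agent limit reached")
        return (True, "ok")

breaker = CircuitBreaker()
print(breaker.can_spawn(active=5, total=12))  # blocked: concurrency cap
print(breaker.can_spawn(active=2, total=12))  # allowed
```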
Run with bunx:

```bash
bunx ensemble-mcp
```
## CLI Options
| Option | Default | Description |
|--------|---------|-------------|
| --quiet, -q | false | Suppress startup banner |
| --model | - | Default model for all backends |
| --claude-model | - | Model for Claude (e.g., sonnet, claude-haiku-4-5) |
| --codex-model | - | Model for Codex (e.g., o3, gpt-5.1-codex) |
| --max-agents | 5 | Max concurrent active agents |
| --max-total | 20 | Max total agents (including completed) |
| --timeout | 30 | Max runtime per agent in minutes |
| --idle-timeout | 10 | Auto-terminate idle agents after N minutes |

Note: Spawned Claude agents automatically bypass user hooks via the --settings flag.

## MCP Tools
| Tool | Description |
|------|-------------|
| spawn_agent | Spawn a new agent using a configured backend |
| list_agents | List all running and completed agents |
| get_agent_output | Get paginated output from an agent |
| get_agent_status | Get activity status (WORKING/IDLE) |
| send_prompt_to_agent | Send follow-up prompt to running agent |
| release_agent | Terminate a specific agent |
| list_backends | List available backends |
| configure_backend | Enable/disable a backend |
| configure_circuit_breaker | Configure safety limits |
| terminate_all_agents | Emergency killswitch |

## Supported Backends
### Claude Code

Spawns headless Claude Code instances using stream-JSON mode for multi-turn conversations.

```
spawn_agent backend="claude" prompt="Write a function to parse JSON"
spawn_agent backend="claude" prompt="Review this code" working_directory="/path/to/project"
```

Multi-turn support: Claude agents stay alive for follow-up prompts via send_prompt_to_agent.
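Under the hood, an MCP client issues these tool invocations as JSON-RPC `tools/call` requests. The first spawn above would look roughly like this; the envelope is the standard MCP shape, and the argument names come from the examples in this README:

```python
import json

# Approximate wire format of the spawn_agent call above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "spawn_agent",
        "arguments": {
            "backend": "claude",
            "prompt": "Write a function to parse JSON",
        },
    },
}
print(json.dumps(request, indent=2))
```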
### Codex

Uses OpenAI Codex via bunx @openai/codex.

```
spawn_agent backend="codex" prompt="Generate a REST API"
```

### Ollama

Local LLM instances (disabled by default). Configure the model in baseArgs.

```
# First enable the backend
configure_backend backend="ollama" enabled=true

# Then spawn agents
spawn_agent backend="ollama" prompt="Explain recursion"
```

## Circuit Breaker

The circuit breaker prevents runaway processes:
| Setting | Default | Description |
|---------|---------|-------------|
| max_active_agents | 5 | Maximum concurrent running agents |
| max_total_agents | 20 | Maximum total agents (including completed) |
| max_runtime_minutes | 30 | Maximum runtime per agent |
| max_output_size_kb | 1024 | Maximum output size per agent |
| max_prompts_per_agent | 10 | Maximum follow-up prompts |
| auto_terminate_idle_minutes | 10 | Auto-terminate idle agents |

Configure via:
```
configure_circuit_breaker max_active_agents=10 max_runtime_minutes=60
```

## MCP Configuration
Add to your Claude Code MCP config:
```json
{
"mcpServers": {
"ensemble": {
"command": "bunx",
"args": ["ensemble-mcp", "--quiet"]
}
}
}
```

Or if using npx:
```json
{
"mcpServers": {
"ensemble": {
"command": "npx",
"args": ["ensemble-mcp", "--quiet"]
}
}
}
```

## Examples
### Parallel Research

```
# Spawn multiple agents for parallel research
spawn_agent backend="claude" prompt="Research the history of neural networks"
spawn_agent backend="claude" prompt="Analyze current trends in AI safety"
spawn_agent backend="claude" prompt="Compare different LLM architectures"

# Check status
get_agent_status

# Retrieve results when idle
get_agent_output agent_id="" full_output=true
```
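get_agent_output pages through large outputs rather than returning them wholesale. The chunking can be sketched as follows; the page/page_size parameter names here are hypothetical, not ensemble-mcp's actual schema:

```python
def get_page(output: str, page: int, page_size: int = 1024) -> dict:
    """Slice an agent's accumulated output into fixed-size pages.

    Illustrative only; parameter names do not reflect the real tool schema.
    """
    start = page * page_size
    chunk = output[start:start + page_size]
    # Ceiling division; an empty output still yields one (empty) page
    total_pages = max(1, -(-len(output) // page_size))
    return {"page": page, "total_pages": total_pages, "content": chunk}

output = "x" * 2500
first = get_page(output, page=0, page_size=1024)
print(first["total_pages"])  # 3 pages for 2500 chars at 1024 each
```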
### Multi-Turn Conversation

```
# Start a Claude agent with context
spawn_agent backend="claude" prompt="Remember: the project uses TypeScript and React"
# Returns an agent ID like "abc12345"

# Send follow-up prompts (agent retains context)
send_prompt_to_agent agent_id="abc12345" prompt="Write a Button component"
send_prompt_to_agent agent_id="abc12345" prompt="Now add hover states"

# Get accumulated output
get_agent_output agent_id="abc12345" full_output=true

# Release when done
release_agent agent_id="abc12345"
```

## Development

```bash
# Clone and install
git clone https://github.com/y0usaf/ensemble
cd ensemble
bun install

# Build
bun run build

# Watch mode
bun run watch

# Test with MCP inspector
bun run inspector
```

## License

GPL-3.0