# Ralph Orchestrator

A hat-based orchestration framework that keeps AI agents in a loop until the task is done.
> "Me fail English? That's unpossible!" - Ralph Wiggum
Documentation | Getting Started | Presets
## Installation

Install via npm:

```bash
npm install -g @ralph-orchestrator/ralph-cli
```

Or via Homebrew:

```bash
brew install ralph-orchestrator
```

Or via Cargo:

```bash
cargo install ralph-cli
```
## Quick Start

```bash
# 1. Initialize Ralph with your preferred backend
ralph init --backend claude
```

Ralph iterates until it outputs `LOOP_COMPLETE` or hits the iteration limit.

For simpler tasks, skip planning and run directly:
```bash
ralph run -p "Add input validation to the /users endpoint"
```

## Web Dashboard (Alpha)
> Alpha: The web dashboard is under active development. Expect rough edges and breaking changes.

Ralph includes a web dashboard for monitoring and managing orchestration loops.
```bash
ralph web                       # starts both servers + opens browser
ralph web --no-open             # skip browser auto-open
ralph web --backend-port 4000   # custom backend port
ralph web --frontend-port 8080  # custom frontend port
```

Requirements: Node.js >= 18 and npm. On first run, `ralph web` will auto-detect missing node_modules and run `npm install` for you.

To set up Node.js:
```bash
# Option 1: nvm (recommended)
nvm install  # reads .nvmrc

# Option 2: direct install from https://nodejs.org/
```

For development:
```bash
npm install          # install dependencies
npm run dev          # run both servers (backend:3000, frontend:5173)
npm run test:server  # backend tests
npm run test         # all tests
```

## What is Ralph?
Ralph implements the Ralph Wiggum technique — autonomous task completion through continuous iteration. It supports:
- Multi-Backend Support — Claude Code, Kiro, Gemini CLI, Codex, Amp, Copilot CLI, OpenCode
- Hat System — Specialized personas coordinating through events
- Backpressure — Gates that reject incomplete work (tests, lint, typecheck)
- Memories & Tasks — Persistent learning and runtime work tracking
- 31 Presets — TDD, spec-driven, debugging, and more
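
Conceptually, the loop-until-done-with-backpressure idea can be sketched as follows. This is an illustrative Python sketch, not Ralph's actual implementation; `ralph_loop`, `agent`, and `gates` are hypothetical stand-ins for a backend invocation and the test/lint/typecheck gates.

```python
# Illustrative sketch of a Ralph-style loop (hypothetical, not Ralph's code).
# `agent` is a callable standing in for a backend; `gates` maps gate names
# to backpressure checks (tests, lint, typecheck) that reject incomplete work.

def ralph_loop(agent, gates, max_iterations=10):
    """Iterate the agent until it signals completion and all gates pass."""
    for i in range(1, max_iterations + 1):
        output = agent(i)
        if "LOOP_COMPLETE" not in output:
            continue  # agent reports there is more work to do
        failures = [name for name, check in gates.items() if not check()]
        if not failures:
            return {"done": True, "iterations": i}
        # Backpressure: reject the completion claim and keep looping
        print(f"gates failed: {failures}; iterating again")
    return {"done": False, "iterations": max_iterations}

# Toy usage: the agent "finishes" on iteration 3 and the gates pass
result = ralph_loop(lambda i: "LOOP_COMPLETE" if i >= 3 else "working...",
                    {"tests": lambda: True, "lint": lambda: True})
print(result)  # {'done': True, 'iterations': 3}
```

The point of the gate check is that "the agent says it's done" is never sufficient on its own; completion only counts when the external checks agree.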
## RObot (Human-in-the-Loop)
Ralph supports human interaction during orchestration via Telegram. Agents can ask questions and block until answered; humans can send proactive guidance at any time.
Quick onboarding (Telegram):
```bash
ralph bot onboard --telegram  # guided setup (token + chat id)
ralph bot status              # verify config
ralph bot test                # send a test message
ralph run -c ralph.bot.yml -p "Help the human"
```

Configure in `ralph.yml`:

```yaml
RObot:
  enabled: true
  telegram:
    bot_token: "your-token"  # Or RALPH_TELEGRAM_BOT_TOKEN env var
```
- Agent questions — Agents emit `human.interact` events; the loop blocks until a response arrives or times out
- Proactive guidance — Send messages anytime to steer the agent mid-loop
- Parallel loop routing — Messages route via reply-to, `@loop-id` prefix, or default to primary
- Telegram commands — `/status`, `/tasks`, `/restart` for real-time loop visibility

See the Telegram guide for setup instructions.
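
The block-until-answered behavior can be sketched like this. The event name `human.interact` comes from the feature list above; everything else (`ask_human`, the reply queue) is a hypothetical illustration, not Ralph's actual API.

```python
import queue
import threading

# Illustrative sketch of a blocking human.interact round-trip
# (hypothetical helper, not Ralph's actual implementation).

def ask_human(question, replies, timeout=5.0):
    """Emit a human.interact event and block until a reply or timeout."""
    print(f"human.interact: {question}")
    try:
        return replies.get(timeout=timeout)  # the loop blocks here
    except queue.Empty:
        return None  # timed out; the loop resumes without guidance

replies = queue.Queue()
# Simulate a human answering (e.g. via Telegram) after a short delay
threading.Timer(0.1, lambda: replies.put("Use zod for validation")).start()
answer = ask_human("Which validation library should I use?", replies)
print(answer)  # Use zod for validation
```

The timeout branch matters: if no human responds, the loop continues rather than hanging forever.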
Full documentation is available at mikeyobrien.github.io/ralph-orchestrator:
- Installation
- Quick Start
- Configuration
- CLI Reference
- Presets
- Concepts: Hats & Events
- Architecture
Contributions are welcome! See CONTRIBUTING.md for guidelines and CODE_OF_CONDUCT.md for community standards.
MIT License — See LICENSE for details.
Acknowledgments:

- Geoffrey Huntley — Creator of the Ralph Wiggum technique
- Strands Agents SOP — Agent SOP framework
- ratatui — Terminal UI framework
---
"I'm learnding!" - Ralph Wiggum