# ralph-wiggum-cli

CLI tool for managing Ralph Wiggum AI development workflows

```bash
npm install ralph-wiggum-cli
```

A command-line tool for managing Ralph Wiggum AI development workflows across projects.
Based on the Ralph Wiggum Technique - an AI development methodology that uses autonomous coding loops with AI agents.
This implementation takes a task-first approach: instead of giving agents autonomy to work through specs, we decompose specs into small tasks during planning, then feed tasks to agents one at a time. Smaller tasks keep agents focused and out of the "dumb zone."
> Meta: This CLI was built by pointing an AI agent at the Ralph Wiggum technique repo, then using Ralph itself to implement and refine the tool. Recursive AI development in action.
## Installation

```bash
bun install -g ralph-wiggum-cli
```

Or with npm:

```bash
npm install -g ralph-wiggum-cli
```

For local development:

```bash
git clone
cd ralph-cli
bun install
bun link
```
Ralph includes a skill file that teaches AI agents how to use the CLI.
```bash
cp -r .claude/skills/ralph ~/.claude/skills/
```
```bash
# Initialize Ralph in your project
cd your-project
ralph-wiggum-cli init
```
## Commands

| Command                   | Description                                    |
| ------------------------- | ---------------------------------------------- |
| `ralph-wiggum-cli init`   | Initialize Ralph in the current project        |
| `ralph-wiggum-cli plan`   | Run planning mode (analyze specs, create plan) |
| `ralph-wiggum-cli build`  | Run build mode (execute tasks from plan)       |
| `ralph-wiggum-cli stop`   | Stop the running session                       |
| `ralph-wiggum-cli status` | Show project status and sessions               |
| `ralph-wiggum-cli agents` | List available AI agents                       |

## Options
### `init`

```bash
ralph-wiggum-cli init [options]

  -a, --agent      AI agent for both modes (default: claude)
  -m, --model      Model for both modes
  --plan-agent     AI agent for planning mode
  --plan-model     Model for planning mode
  --build-agent    AI agent for building mode
  --build-model    Model for building mode
  -f, --force      Force reinitialization
```

### `plan` / `build`

```bash
ralph-wiggum-cli plan [options]
ralph-wiggum-cli build [options]

  -a, --agent      Override the configured agent
  -m, --model      Override the configured model
  -v, --verbose    Enable verbose output (shows agent stdout/stderr)
```

## Supported Agents
| Agent      | Description              |
| ---------- | ------------------------ |
| `claude`   | Claude Code by Anthropic |
| `amp`      | Amp Code by Sourcegraph  |
| `droid`    | Factory Droid CLI        |
| `opencode` | OpenCode CLI             |
| `cursor`   | Cursor Agent CLI         |
| `codex`    | OpenAI Codex CLI         |
| `gemini`   | Gemini CLI by Google     |
| `pi`       | Pi coding agent          |

## Project Structure
After `ralph-wiggum-cli init`, your project will have:

```
your-project/
└── .ralph-wiggum/
    ├── config.json           # Project config and session history
    ├── PROMPT_plan.md        # Planning mode prompt (customizable)
    ├── implementation.json   # Task tracking (generated by plan mode)
    ├── GUARDRAILS.md         # Compliance rules (before/after checks)
    ├── PROGRESS.md           # Audit trail of completed work
    ├── specs/                # Specification files
    │   └── example.md        # Example spec template
    └── logs/                 # Session logs (gitignored)
```

## How It Works
This CLI implements a task-first variation of the Ralph Wiggum technique.
Traditional Ralph implementations give agents autonomy to pick tasks and work spec-by-spec. This approach is different: we break specs into small, focused tasks during planning, then feed them to the agent one at a time during building.
Why task-first?
- Smaller context = smarter agent. Large specs push agents into the "dumb zone" where they lose focus and make mistakes. Small tasks keep them sharp.
- Deterministic execution. Tasks are picked by priority, not agent judgment. You control the order.
- Better progress tracking. Each task completion is a checkpoint. If something fails, you know exactly where.
### Plan Mode

1. The AI reads all specs in `.ralph-wiggum/specs/`
2. Audits the codebase to understand the current state
3. Breaks each spec into small, actionable tasks with acceptance criteria
4. Outputs `implementation.json` with a prioritized task queue

The planner's job is to think deeply about the work and decompose it properly. This is where complexity lives.
### Build Mode
1. Picks the next pending task (by spec priority, then task order)
2. Injects the task directly into the agent's context, along with a reference to the larger spec for background
3. Agent implements just that one task
4. On completion, marks task done and loops to the next
5. Continues until all tasks across all specs are complete
The builder's job is simple: execute one small task at a time. No decisions, no prioritization—just focused implementation.
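The loop above can be modeled in a few lines. This is an illustrative sketch of the control flow only, not the tool's actual source; the `Task` shape and the `runAgent` callback are assumptions made for the example:

```typescript
// Minimal model of the task-first build loop (illustrative, not the real implementation).
type Task = {
  id: string;
  specPriority: number; // priority of the spec the task came from
  order: number;        // position of the task within its spec
  status: "pending" | "done" | "blocked";
};

// Deterministic selection: spec priority first, then task order -- no agent judgment.
function nextTask(tasks: Task[]): Task | undefined {
  return tasks
    .filter((t) => t.status === "pending")
    .sort((a, b) => a.specPriority - b.specPriority || a.order - b.order)[0];
}

// Feed tasks to the agent one at a time; stop on a blocked task so a human can intervene.
function runBuildLoop(tasks: Task[], runAgent: (task: Task) => boolean): void {
  for (let task = nextTask(tasks); task; task = nextTask(tasks)) {
    task.status = runAgent(task) ? "done" : "blocked";
    if (task.status === "blocked") break;
  }
}
```

Because selection is a pure sort over priority and order, the execution sequence is fully determined by the plan, which is the point of the task-first variation.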
### Completion Signals

Agents signal completion via stdout:

- Task completed successfully
- Task blocked (missing dependency, unclear requirement)

### Notifications
Ralph supports Telegram notifications for loop events (start, task complete, blocked, done). Configure them during `init` or in `config.json`.

## License

MIT