AI-powered feature development loop CLI
npm install wiggum-cli
Plug into any codebase. Generate specs. Ship features while you sleep.
Quick Start · How It Works · Website · Issues
---
Wiggum is an AI agent that plugs into any codebase and makes it ready for autonomous feature development — no configuration, no boilerplate.
It works in two phases. First, Wiggum itself is the agent: it scans your project, detects your stack, and runs an AI-guided interview to produce detailed specs, prompts, and scripts — all tailored to your codebase. Then it delegates the actual coding to Claude Code, Codex, or any CLI-based coding agent, running an autonomous implement → test → fix loop until the feature ships.
Plug & play. Point it at a repo. It figures out the rest.
```
          Wiggum (agent)                         Coding Agent
┌────────────────────────────────┐          ┌────────────────────┐
│                                │          │                    │
│  Scan ──▶  Interview ──▶ Spec  │   ──▶    │  Run loops         │
│  detect    AI-guided    .ralph/│          │  implement         │
│  80+ tech  questions    specs  │          │  test + fix        │
│  plug&play prompts      guides │          │  until done        │
│                                │          │                    │
└────────────────────────────────┘          └────────────────────┘
      runs in your terminal                 Claude Code / Codex
```
---
```bash
npm install -g wiggum-cli
```
Then, in your project:
```bash
wiggum init           # Scan project, configure AI provider
wiggum new user-auth  # AI interview → feature spec
wiggum run user-auth  # Autonomous coding loop
```
Or skip the global install:
```bash
npx wiggum-cli init
```
---
🔍 Smart Detection — Auto-detects 80+ technologies: frameworks, databases, ORMs, testing tools, deployment targets, MCP servers, and more.
🎙️ AI-Guided Interviews — Generates detailed, project-aware feature specs through a structured 4-phase interview. No more blank-page problem.
🔁 Autonomous Coding Loops — Hands specs to Claude Code (or any agent) and runs implement → test → fix cycles with git worktree isolation.
📋 Tailored Prompts — Generates prompts, guides, and scripts specific to your stack. Not generic templates — actual context about your project.
🔌 BYOK — Bring your own API keys. Works with Anthropic, OpenAI, or OpenRouter. Keys stay local, never leave your machine.
🖥️ Interactive TUI — Full terminal interface with persistent session state. No flags to remember.
---
```bash
wiggum init
```
Wiggum reads your package.json, config files, source tree, and directory structure. A multi-agent AI system then analyzes the results:
1. Planning Orchestrator — creates an analysis plan based on detected stack
2. Parallel Workers — Context Enricher explores code while Tech Researchers gather best practices
3. Synthesis — merges results, detects relevant MCP servers
4. Evaluator-Optimizer — QA loop that validates and refines the output
Output: a .ralph/ directory with configuration, prompts, guides, and scripts — all tuned to your project.
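Once init completes, you can inspect what it generated:

```bash
# Browse the generated .ralph/ directory
ls .ralph/
cat .ralph/ralph.config.cjs   # stack detection results + loop config
```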
```bash
wiggum new payment-flow
```
An AI-guided interview walks you through:
| Phase | What happens |
|-------|-------------|
| Context | Share reference URLs, docs, or files |
| Goals | Describe what you want to build |
| Interview | AI asks 3–5 clarifying questions |
| Generation | Produces a detailed feature spec in .ralph/specs/ |
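The spec is a plain Markdown file, so you can review or edit it before starting the loop. The filename below is illustrative; check .ralph/specs/ for the actual file:

```bash
# Review the generated spec before kicking off the loop
cat .ralph/specs/payment-flow.md   # exact filename may differ
```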
```bash
wiggum run payment-flow
```
Wiggum hands the spec + prompts + project context to your coding agent and runs an autonomous loop:
```
implement → run tests → fix failures → repeat
```
Supports git worktree isolation (--worktree) for running multiple features in parallel.
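For example, two features can run side by side, each loop working in its own isolated worktree:

```bash
# Run in separate terminals; each loop gets an isolated git worktree
wiggum run payment-flow --worktree
wiggum run user-auth --worktree
```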
---
Running wiggum with no arguments opens the TUI — the recommended way to use Wiggum:
```bash
$ wiggum
```
| Command | Alias | Description |
|---------|-------|-------------|
| /init | /i | Scan project, configure AI provider |
| /new | /n | AI interview → feature spec |
| /run | /r | Run autonomous coding loop |
| /monitor | /m | Monitor a running feature |
| /sync | /s | Re-scan project, update context |
| /help | /h | Show commands |
| /exit | /q | Exit |
---
```
.ralph/
├── ralph.config.cjs          # Stack detection results + loop config
├── prompts/
│   ├── PROMPT.md             # Implementation prompt
│   ├── PROMPT_feature.md     # Feature planning
│   ├── PROMPT_e2e.md         # E2E testing
│   ├── PROMPT_verify.md      # Verification
│   └── PROMPT_review.md      # PR review
├── guides/
│   ├── AGENTS.md             # Agent instructions (CLAUDE.md)
│   ├── FRONTEND.md           # Frontend patterns
│   ├── SECURITY.md           # Security guidelines
│   └── PERFORMANCE.md        # Performance patterns
├── scripts/
│   └── feature-loop.sh       # Main loop script
├── specs/
│   └── _example.md           # Example spec template
└── LEARNINGS.md              # Accumulated project learnings
```
---
`wiggum init [options]`
Scan the project, detect the tech stack, generate configuration.
| Flag | Description |
|------|-------------|
| --provider | AI provider: anthropic, openai, openrouter (default: anthropic) |
| -i, --interactive | Stay in interactive mode after init |
| -y, --yes | Accept defaults, skip confirmations |
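For example, a non-interactive setup using OpenAI:

```bash
wiggum init --provider openai -y
```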
`wiggum new <feature> [options]`
Create a feature specification via AI-powered interview.
| Flag | Description |
|------|-------------|
| --ai | Use AI interview (default in TUI mode) |
| --provider | AI provider for spec generation |
| --model | Model to use |
| -e, --edit | Open in editor after creation |
| -f, --force | Overwrite existing spec |
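For example, to run the AI interview and open the resulting spec in your editor:

```bash
wiggum new payment-flow --ai -e
```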
`wiggum run <feature> [options]`
Run the autonomous development loop.
| Flag | Description |
|------|-------------|
| --worktree | Git worktree isolation (parallel features) |
| --resume | Resume an interrupted loop |
| --model | Claude model (opus, sonnet) |
| --max-iterations | Max iterations (default: 50) |
| --max-e2e-attempts | Max E2E retries (default: 3) |
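For example, an isolated run with a tighter iteration budget, or resuming one that was interrupted:

```bash
wiggum run payment-flow --worktree --max-iterations 20
wiggum run payment-flow --resume
```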
`wiggum monitor <feature> [options]`
Track feature development progress in real-time.
| Flag | Description |
|------|-------------|
| --interval | Refresh interval (default: 5) |
| --bash | Use bash monitor script |
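For example:

```bash
wiggum monitor payment-flow --interval 10
```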
---
Wiggum requires an API key from one of these providers:
| Provider | Environment Variable |
|----------|---------------------|
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI | OPENAI_API_KEY |
| OpenRouter | OPENROUTER_API_KEY |
Optional services for deeper analysis:
| Service | Variable | Purpose |
|---------|----------|---------|
| Tavily | TAVILY_API_KEY | Web search for current best practices |
| Context7 | CONTEXT7_API_KEY | Up-to-date documentation lookup |
Keys are stored in .ralph/.env.local and never leave your machine.
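As a rough sketch, .ralph/.env.local ends up holding entries along these lines (values are placeholders; only one provider key is required):

```bash
# .ralph/.env.local (placeholder values)
ANTHROPIC_API_KEY=your-anthropic-key
# Optional services for deeper analysis
TAVILY_API_KEY=your-tavily-key
CONTEXT7_API_KEY=your-context7-key
```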
---
🔍 Detection Capabilities (80+ technologies)
| Category | Technologies |
|----------|-------------|
| Frameworks | Next.js (App/Pages Router), React, Vue, Nuxt, Svelte, SvelteKit, Remix, Astro |
| Package Managers | npm, yarn, pnpm, bun |
| Testing | Jest, Vitest, Playwright, Cypress |
| Styling | Tailwind CSS, CSS Modules, Styled Components, Emotion, Sass |
| Databases | PostgreSQL, MySQL, SQLite, MongoDB, Redis |
| ORMs | Prisma, Drizzle, TypeORM, Mongoose, Kysely |
| APIs | REST, GraphQL, tRPC, OpenAPI |
| State | Zustand, Jotai, Redux, Pinia, Recoil, MobX, Valtio |
| UI Libraries | shadcn/ui, Radix, Material UI, Chakra UI, Ant Design, Headless UI |
| Auth | NextAuth.js, Clerk, Auth0, Supabase Auth, Lucia, Better Auth |
| Analytics | PostHog, Mixpanel, Amplitude, Google Analytics, Plausible |
| Payments | Stripe, Paddle, LemonSqueezy |
| Email | Resend, SendGrid, Postmark, Mailgun |
| Deployment | Vercel, Netlify, Railway, Fly.io, Docker, AWS |
| Monorepos | Turborepo, Nx, Lerna, pnpm workspaces |
| MCP | Detects MCP server/client configs, recommends servers based on stack |
---
- Node.js >= 18.0.0
- Git (for worktree features)
- An AI provider API key (Anthropic, OpenAI, or OpenRouter)
- Claude Code or another coding agent (for wiggum run)
---
Contributions welcome! See CONTRIBUTING.md for guidelines.
```bash
git clone https://github.com/federiconeri/wiggum-cli.git
cd wiggum-cli
npm install
npm run build
npm test
```
---
MIT + Commons Clause — see LICENSE.
You can use, modify, and distribute Wiggum freely. You may not sell the software or a service whose value derives substantially from Wiggum's functionality.
---
Built on the Ralph loop technique by Geoffrey Huntley