# prd-parser

> 🚧 Active Development - This project is new and actively evolving. Expect breaking changes. Contributions and feedback welcome!

Turn your PRD into a ready-to-work beads project in one command.

prd-parser uses LLM guardrails to transform Product Requirements Documents into a hierarchical issue structure (Epics → Tasks → Subtasks) and creates them directly in beads - the git-backed issue tracker for AI-driven development.

```bash
# One command: PRD → structured beads issues
prd-parser parse ./docs/prd.md
```
## Why prd-parser?

Starting a new project is exciting. You have a vision, maybe a PRD, and you're ready to build. But then:
1. The breakdown problem - You need to turn that PRD into actionable tasks. This is tedious and error-prone. You lose context as you go.
2. The context problem - By the time you're implementing subtask #47, you've forgotten why it matters. What was the business goal? Who are the users? What constraints apply?
3. The handoff problem - If you're using AI to help implement, it needs that context too. Copy-pasting from your PRD for every task doesn't scale.
prd-parser + beads solves all three. Write your PRD once, run one command, and get a complete project structure with context propagated to every level - ready for you or Claude to start implementing.
For greenfield projects, this is the fastest path from idea to structured, trackable work:
| Without prd-parser | With prd-parser |
|-------------------|-----------------|
| Read PRD, manually create issues | One command |
| Forget context by subtask #10 | Context propagated everywhere |
| Testing requirements? Maybe later | Testing enforced at every level |
| Dependencies tracked in your head | Dependencies explicit and tracked |
| Copy-paste context for AI helpers | AI has full context in every issue |
How it works: prd-parser uses Go struct guardrails to force the LLM to output valid, hierarchical JSON with:
- Context propagation - Business purpose flows from PRD → Epic → Task → Subtask
- Testing at every level - Unit, integration, type, and E2E requirements enforced
- Dependencies tracked - Issues know what blocks them
- Direct beads integration - Issues created with one command, ready to work
## Installation

Via npm/bun (easiest):

```bash
npm install -g prd-parser
# or
bun install -g prd-parser
# or run without installing:
npx prd-parser parse ./docs/prd.md
```
Via Go:

```bash
go install github.com/dhabedank/prd-parser@latest
```
From source:

```bash
cd /tmp && git clone https://github.com/dhabedank/prd-parser.git && cd prd-parser && make install
```
If you see "Make sure ~/go/bin is in your PATH", run:

```bash
echo 'export PATH="$HOME/go/bin:$PATH"' >> ~/.zshrc && source ~/.zshrc
```
## Quick Start

### 1. Initialize a Project

Now go back to your project:

```bash
mkdir my-project && cd my-project
git init
bd init --prefix my-project
```
### 2. Write Your PRD

Create `docs/prd.md` with your product requirements. Include:
- What you're building and why
- Who the target users are
- Technical constraints
- Key features
Example:

```markdown
# Task Management CLI
...
```

### 3. Parse

```bash
prd-parser parse docs/prd.md
```

Full context mode is enabled by default - every generation stage has access to your original PRD, producing the most coherent results.
That's it. Your PRD is now a structured beads project with readable hierarchical IDs:
```bash
$ bd list
○ my-project-e1 [P1] [epic] - Core Task Management System
○ my-project-e1t1 [P0] [task] - Implement Task Data Model
○ my-project-e1t1s1 [P2] [task] - Define Task struct with JSON tags
○ my-project-e1t1s2 [P2] [task] - Implement JSON file storage
○ my-project-e1t2 [P1] [task] - Build CLI Interface
○ my-project-e2 [P1] [epic] - User Authentication
...
```

IDs follow a logical hierarchy: e1 (epic 1) → e1t1 (task 1) → e1t1s1 (subtask 1). Use `bd show` to see parent/child relationships.

### 4. Start Working
```bash
# See what's ready to work on
bd ready

# Pick an issue and let Claude implement it
bd show my-project-4og  # Shows full context, testing requirements

# Or let Claude pick and work autonomously
# (beads integrates with Claude Code via the beads skill)
```

## What prd-parser Creates
### Hierarchical Structure

```
Epic: Core Task Management System
├── Task: Implement Task Data Model
│   ├── Subtask: Define Task struct with JSON tags
│   └── Subtask: Implement JSON file storage
├── Task: Build CLI Interface
│   ├── Subtask: Implement create command
│   └── Subtask: Implement list command
└── ...
```

### Context Propagation

Every issue includes propagated context so implementers understand WHY:

```markdown
Context:
- Business Context: Developers need fast, frictionless task management
- Target Users: Terminal-first developers who want <100ms operations
- Success Metrics: All CRUD operations complete in under 100ms
```

### Testing Requirements

Every issue specifies what testing is needed:

```markdown
Testing Requirements:
- Unit Tests: Task struct validation, JSON marshaling/unmarshaling
- Integration Tests: Full storage layer integration, concurrent access
- Type Tests: Go struct tags validation, JSON schema compliance
```

### Intelligent Priorities

The LLM evaluates each task and assigns an appropriate priority (not just a default):
| Priority | When to Use |
|----------|-------------|
| P0 (critical) | Blocks all work, security issues, launch blockers |
| P1 (high) | Core functionality, enables other tasks |
| P2 (medium) | Important features, standard work |
| P3 (low) | Nice-to-haves, polish |
| P4 (very-low) | Future considerations, can defer indefinitely |
Foundation/setup work gets higher priority. Polish/UI tweaks get lower priority.
### Automatic Labels
Issues are automatically labeled based on:
- Layer: frontend, backend, api, database, infra
- Domain: auth, payments, search, notifications
- Skill: react, go, sql, typescript
- Type: setup, feature, refactor, testing
Labels are extracted from the PRD's tech stack and feature descriptions.
### Acceptance Criteria & Design Notes
- Epics include acceptance criteria for when the epic is complete
- Tasks include design notes for technical approach
### Time Estimates
All items include time estimates that flow to beads:
- Epics: estimated days
- Tasks: estimated hours
- Subtasks: estimated minutes
### Dependencies
Issues are linked with proper blocking relationships:
- Tasks depend on setup tasks
- Subtasks depend on parent task completion
- Cross-epic dependencies are tracked
## Configuration
### Interactive Setup
The easiest way to configure prd-parser is with the interactive setup wizard:
```bash
prd-parser setup
```

The wizard guides you through selecting models for each parsing stage:
- Epic Model (Stage 1): Generates epics from your PRD
- Task Model (Stage 2): Generates tasks for each epic
- Subtask Model (Stage 3): Generates subtasks for each task
Configuration is saved to `~/.prd-parser.yaml`. To reset to defaults:

```bash
prd-parser setup --reset
```

### Per-Stage Models
Use different models for different stages to optimize for cost vs. quality:
```bash
# Use Opus for epics (complex), Sonnet for tasks, Haiku for subtasks (fast)
prd-parser parse docs/prd.md \
--epic-model claude-opus-4-20250514 \
--task-model claude-sonnet-4-20250514 \
--subtask-model claude-3-5-haiku-20241022
```

Or configure in `~/.prd-parser.yaml`:

```yaml
epic_model: claude-opus-4-20250514
task_model: claude-sonnet-4-20250514
subtask_model: claude-3-5-haiku-20241022
```

Command-line flags always override config file settings.

### Parse Options
```bash
# Basic parse (full context mode is on by default)
prd-parser parse ./prd.md

# Control structure size
prd-parser parse ./prd.md --epics 5 --tasks 8 --subtasks 4

# Set default priority
prd-parser parse ./prd.md --priority high

# Choose testing level
prd-parser parse ./prd.md --testing comprehensive  # or minimal, standard

# Preview without creating (dry run)
prd-parser parse ./prd.md --dry-run

# Save/resume from checkpoint (useful for large PRDs)
prd-parser parse ./prd.md --save-json checkpoint.json
prd-parser parse --from-json checkpoint.json

# Disable full context mode (not recommended)
prd-parser parse ./prd.md --full-context=false
```

### All Flags
| Flag | Short | Default | Description |
|------|-------|---------|-------------|
| `--epics` | `-e` | 3 | Target number of epics |
| `--tasks` | `-t` | 5 | Target tasks per epic |
| `--subtasks` | `-s` | 4 | Target subtasks per task |
| `--priority` | `-p` | medium | Default priority (critical/high/medium/low) |
| `--testing` | | comprehensive | Testing level (minimal/standard/comprehensive) |
| `--llm` | `-l` | auto | LLM provider (auto/claude-cli/codex-cli/anthropic-api) |
| `--model` | `-m` | | Model to use (provider-specific) |
| `--epic-model` | | | Model for epic generation (Stage 1) |
| `--task-model` | | | Model for task generation (Stage 2) |
| `--subtask-model` | | | Model for subtask generation (Stage 3) |
| `--no-progress` | | false | Disable TUI progress display |
| `--multi-stage` | | false | Force multi-stage parsing |
| `--single-shot` | | false | Force single-shot parsing |
| `--smart-threshold` | | 300 | Line count for auto multi-stage (0 to disable) |
| `--full-context` | | true | Pass PRD to all stages (use `=false` to disable) |
| `--validate` | | false | Run validation pass to check for gaps |
| `--no-review` | | false | Disable automatic LLM review pass (review on by default) |
| `--interactive` | | false | Human-in-the-loop mode (review epics before task generation) |
| `--output` | `-o` | beads | Output adapter (beads/json) |
| `--output-path` | | | Output path for JSON adapter |
| `--dry-run` | | false | Preview without creating items |
| `--from-json` | | | Resume from saved JSON checkpoint (skip LLM) |
| `--save-json` | | | Save generated JSON to file (for resume) |
| `--config` | | | Config file path (default: .prd-parser.yaml) |

### Smart Parsing Strategy
prd-parser automatically chooses the best parsing strategy based on PRD size:
- Small PRDs (< 300 lines): Single-shot parsing (faster)
- Large PRDs (≥ 300 lines): Multi-stage parallel parsing (more reliable)
Override with the `--single-shot` or `--multi-stage` flags, or adjust the threshold with `--smart-threshold`.
### Full Context Mode

Full context mode is enabled by default. Every stage gets the original PRD as its "north star":
```bash
prd-parser parse docs/prd.md  # full context is on by default
```

To disable (not recommended):

```bash
prd-parser parse docs/prd.md --full-context=false
```

Why this matters:
| Mode | Stage 1 (Epics) | Stage 2 (Tasks) | Stage 3 (Subtasks) |
|------|----------------|-----------------|-------------------|
| Without full context | Full PRD | Epic summary only | Task summary only |
| With full context (default) | Full PRD | Epic + PRD | Task + PRD |

With `--full-context`, each agent:
- Stays grounded in the original requirements
- Doesn't invent features not in the PRD
- Doesn't miss requirements that ARE in the PRD
- Produces more focused, coherent output

Results comparison (same PRD):
| Metric | Without Full Context | With Full Context |
|--------|---------|--------------|
| Epics | 11 | 8 |
| Tasks | 65 | 49 |
| Subtasks | 264 | 202 |
Fewer items with full context means less redundancy and tighter focus on the actual requirements.
### Recommended Workflows
Standard parse (full context on by default):
```bash
prd-parser parse docs/prd.md
```

Every stage sees the PRD. Best for accuracy and coherence.

Preview before committing:
```bash
prd-parser parse docs/prd.md --dry-run
```

See what would be created without actually creating issues.

Save checkpoint for manual review:
```bash
prd-parser parse docs/prd.md --save-json draft.json --dry-run
# Edit draft.json manually
prd-parser parse --from-json draft.json
```

Human-in-the-loop for large/complex PRDs:
```bash
prd-parser parse docs/prd.md --interactive
```

Review and edit epics before task generation.

Quick parse for small PRDs:
```bash
prd-parser parse docs/prd.md --single-shot
```

Faster single LLM call. Works well for PRDs under 300 lines.

Cost-optimized for large PRDs:
```bash
prd-parser parse docs/prd.md \
  --epic-model claude-opus-4-20250514 \
  --task-model claude-sonnet-4-20250514 \
  --subtask-model claude-3-5-haiku-20241022
```

Use Opus for epics (complex analysis), Sonnet for tasks, Haiku for subtasks (fast, cost-effective).

Maximum validation:
```bash
prd-parser parse docs/prd.md --validate
```

Full context plus a validation pass to catch gaps.

Debug/iterate on structure:
```bash
prd-parser parse docs/prd.md --save-json iter1.json --dry-run
# Review iter1.json, note issues
prd-parser parse docs/prd.md --save-json iter2.json --dry-run
# Compare, pick the better one
prd-parser parse --from-json iter2.json
```

### Validation Pass
Use `--validate` to run a final review that checks for gaps in the generated plan:

```bash
prd-parser parse ./prd.md --validate
```

This asks the LLM to review the complete plan and identify:
- Missing setup/initialization tasks
- Backend without UI to test it
- Dependencies not installed
- Acceptance criteria that can't be verified
- Tasks in wrong order
Example output:
```
✓ Plan validation passed - no gaps found
```

or

```
✗ Plan validation found gaps:
• No task to install dependencies after adding @clerk/nextjs
• Auth API built but no login page to test it
```

### Automatic Review Pass
By default, prd-parser runs an automatic review pass after generation that checks for and fixes structural issues:
- Missing "Project Foundation" epic as Epic 1 (setup should come first)
- Feature epics not depending on Epic 1 (all work depends on setup)
- Missing setup tasks in foundation epic
- Incorrect dependency chains (setup → backend → frontend)
```bash
# Review is on by default
prd-parser parse ./prd.md
# See: "Reviewing structure..."
# See: "✓ Review fixed issues: Added Project Foundation epic..."
# Or: "✓ Review passed - no changes needed"

# Disable if you want raw output
prd-parser parse ./prd.md --no-review
```

### Interactive Mode
For human-in-the-loop review during generation:
```bash
prd-parser parse docs/prd.md --interactive
```

In interactive mode, you'll review epics after Stage 1 before task generation continues:

```
=== Stage 1 Complete: 4 Epics Generated ===

Proposed Epics:
1. Project Foundation (depends on: none)
Initialize Next.js, Convex, Clerk setup
2. Voice Infrastructure (depends on: 1)
Telnyx phone system integration
3. AI Conversations (depends on: 1)
LFM 2.5 integration for call handling
4. CRM Integration (depends on: 1)
Follow Up Boss sync
[Enter] continue, [e] edit in $EDITOR, [r] regenerate, [a] add epic:
```

Options:
- Enter - Accept epics and continue to task generation
- e - Open epics in your $EDITOR for manual editing
- r - Regenerate epics from scratch
- a - Add a new epic

Interactive mode skips the automatic review pass since you are the reviewer.

### Manual Editing Workflow
For full manual control over the generated structure:
Step 1: Generate Draft

```bash
prd-parser parse docs/prd.md --save-json draft.json --dry-run
```

Step 2: Review and Edit

Open `draft.json` in your editor. You can:
- Reorder epics (change array order)
- Add/remove epics, tasks, or subtasks
- Fix dependencies
- Adjust priorities and estimates

Step 3: Create from Edited Draft

```bash
prd-parser parse --from-json draft.json
```

The PRD file argument is optional when using `--from-json`.

Auto-Recovery: If creation fails mid-way, prd-parser saves a checkpoint to `/tmp/prd-parser-checkpoint.json`. Retry with:

```bash
prd-parser parse --from-json /tmp/prd-parser-checkpoint.json
```

## Refining Issues After Generation
After parsing, you may find issues that are misaligned with your product vision. The `refine` command lets you correct an issue and automatically propagate fixes to related issues.

### Usage
```bash
# Correct an epic that went off-track
prd-parser refine test-e6 --feedback "RealHerd is voice-first lead intelligence, not a CRM with pipeline management"

# Preview changes without applying
prd-parser refine test-e3t2 --feedback "Should use OpenRouter, not direct OpenAI" --dry-run

# Include PRD for better context
prd-parser refine test-e6 --feedback "Focus on conversation insights" --prd docs/prd.md
```

### How It Works
1. Analyze: LLM identifies wrong concepts in the target issue (e.g., "pipeline tracking", "deal stages")
2. Correct: Generates corrected version with right concepts ("conversation insights", "activity visibility")
3. Scan: Searches ALL issues (across all epics) for the same wrong concepts
4. Propagate: Regenerates affected issues with correction context
5. Update: Applies changes via `bd update`
### Refine Flags

| Flag | Default | Description |
|------|---------|-------------|
| `--feedback`, `-f` | required | What's wrong and how to fix it |
| `--cascade` | true | Also update children of target issue |
| `--scan-all` | true | Scan all issues for the same misalignment |
| `--dry-run` | false | Preview changes without applying |
| `--prd` | | Path to PRD file for context |

### Example
```
$ prd-parser refine test-e6 --feedback "RealHerd is voice-first, not CRM"

Loading issue test-e6...
Found: Brokerage Dashboard & Reporting

Analyzing misalignment...
Identified misalignment:
- pipeline tracking
- deal management
- contract stages

Corrected version:
Title: Agent Activity Dashboard & Conversation Insights
Description: Real-time visibility into agent conversations...

Scanning for affected issues...
Found 3 children
Found 2 issues with similar misalignment

--- Changes to apply ---
Target: test-e6
+ test-e6t3: Pipeline Overview Component
+ test-e6t4: Deal Tracking Interface
+ test-e3t5: CRM Pipeline Sync

Applying corrections...
✓ Updated test-e6
✓ Updated test-e6t3
✓ Updated test-e6t4
✓ Updated test-e3t5

--- Summary ---
Updated: 1 target + 4 related issues
```

## LLM Providers
### Auto-Detection
prd-parser auto-detects installed LLM CLIs - no API keys needed:
```bash
# If you have Claude Code installed, it just works
prd-parser parse ./prd.md

# If you have Codex installed, it just works
prd-parser parse ./prd.md
```

### Detection Order
1. Claude Code CLI (`claude`) - Preferred, already authenticated
2. Codex CLI (`codex`) - Already authenticated
3. Anthropic API - Fallback if `ANTHROPIC_API_KEY` is set

### Manual Selection
```bash
# Force a specific provider
prd-parser parse ./prd.md --llm claude-cli
prd-parser parse ./prd.md --llm codex-cli
prd-parser parse ./prd.md --llm anthropic-api

# Specify a model
prd-parser parse ./prd.md --llm claude-cli --model claude-sonnet-4-20250514
prd-parser parse ./prd.md --llm codex-cli --model o3
```

## Output Options
### beads (default)
Creates issues directly in the current beads-initialized project:
```bash
bd init --prefix myproject
prd-parser parse ./prd.md --output beads
bd list  # See created issues
```

### JSON
Export to JSON for inspection or custom processing:
```bash
# Write to file
prd-parser parse ./prd.md --output json --output-path tasks.json

# Write to stdout (pipe to other tools)
prd-parser parse ./prd.md --output json | jq '.epics[0].tasks'
```

## The Guardrails System

prd-parser isn't just a prompt wrapper. It uses Go structs as guardrails to enforce valid output:
```go
type Epic struct {
	TempID             string              `json:"temp_id"`
	Title              string              `json:"title"`
	Description        string              `json:"description"`
	Context            interface{}         `json:"context"`
	AcceptanceCriteria []string            `json:"acceptance_criteria"`
	Testing            TestingRequirements `json:"testing"`
	Tasks              []Task              `json:"tasks"`
	DependsOn          []string            `json:"depends_on"`
}

type TestingRequirements struct {
	UnitTests        *string `json:"unit_tests,omitempty"`
	IntegrationTests *string `json:"integration_tests,omitempty"`
	TypeTests        *string `json:"type_tests,omitempty"`
	E2ETests         *string `json:"e2e_tests,omitempty"`
}
```

The LLM MUST produce output that matches these structs. Missing required fields? Validation fails. Wrong types? Parse fails. This ensures every PRD produces consistent, complete issue structures.
## Architecture

```
prd-parser/
├── cmd/                          # CLI commands (Cobra)
│   └── parse.go                  # Main parse command
├── internal/
│   ├── core/                     # Core types and orchestration
│   │   ├── types.go              # Hierarchical structs (guardrails)
│   │   ├── prompts.go            # Single-shot system/user prompts
│   │   ├── stage_prompts.go      # Multi-stage prompts (Stages 1-3)
│   │   ├── parser.go             # Single-shot LLM → output orchestration
│   │   ├── multistage.go         # Multi-stage parallel parser
│   │   └── validate.go           # Validation pass logic
│   ├── llm/                      # LLM adapters
│   │   ├── adapter.go            # Interface definition
│   │   ├── claude_cli.go         # Claude Code CLI adapter
│   │   ├── codex_cli.go          # Codex CLI adapter
│   │   ├── anthropic_api.go      # API fallback
│   │   ├── detector.go           # Auto-detection logic
│   │   └── multistage_generator.go # Multi-stage LLM calls
│   └── output/                   # Output adapters
│       ├── adapter.go            # Interface definition
│       ├── beads.go              # beads issue tracker
│       └── json.go               # JSON file output
└── tests/                        # Unit tests
```

## Adding Custom Adapters

### LLM Adapter
```go
type Adapter interface {
	Name() string
	IsAvailable() bool
	Generate(ctx context.Context, systemPrompt, userPrompt string) (*core.ParseResponse, error)
}
```

### Output Adapter
```go
type Adapter interface {
	Name() string
	IsAvailable() (bool, error)
	CreateItems(response core.ParseResponse, config Config) (CreateResult, error)
}
```
## Related Projects

- beads - Git-backed issue tracker for AI-driven development
- Claude Code - Claude's official CLI with beads integration
## License

MIT