AI-guided LLM optimization. Install → tell Claude "Read .claude/agents/iris.md" → Claude becomes your optimization guide. DSPy prompts, Ax hyperparameters, local LLMs, federated learning. You talk, Iris handles the rest.

```bash
npm install @foxruv/iris
```

Talk to Claude. It handles the rest.

```
You: "Help me optimize my prompts"
Iris: "I scanned your project. Found 3 AI components.
Best candidate: summarizer.ts (+20% potential).
Setting up DSPy... Done.
Running optimization...
📈 Accuracy: 72% → 89%
Want me to apply the changes?"
```

No CLI commands. No config files. No learning curve. Just results.

---
Without Iris

```bash
# Step 1: Install dependencies
pip install dspy-ai ax-platform
python optimize.py
```

⏱️ Time: 2-4 hours per component
📚 Required: DSPy expertise, Python scripting
🧠 Retained: Nothing (starts over each time)

---

With Iris

```
You: "Optimize my summarizer"
Iris: "On it."
✅ Detected TypeScript project
✅ Found summarizer.ts
✅ Installing @ts-dspy/core...
✅ Scanning for training examples...
✅ Running 30-trial optimization...
✅ Best result: 89% accuracy (+17%)
"Here's what I changed:
- Restructured prompt for clarity
- Added 3 few-shot examples
- Temperature: 1.0 → 0.7
Apply these changes?"
You: "Yes"
Iris: "Done. Pattern saved for future projects."
```

⏱️ Time: 30 seconds
📚 Required: Nothing
🧠 Retained: Everything (learns and improves)
---
Before vs. After

| Before Iris | After Iris |
|-------------|------------|
| Install DSPy/Ax manually | Auto-installed |
| Write Python scripts | Just talk |
| Read 50 pages of docs | Zero learning curve |
| Collect examples manually | Auto-detected |
| Configure optimizers | Smart defaults |
| Parse output yourself | Plain English results |
| Apply changes manually | One-click apply |
| Forget what worked | Patterns saved forever |
| Start over each project | Knowledge transfers |
| No validation | AI Council approval |
| 2-4 hours | 30 seconds |
| Expert required | Anyone can do it |
| Knowledge lost | Knowledge compounds |

---
⚡ Quick Start
Just type this into Claude Code:

```
Install @foxruv/iris@latest, find the agent and skill files it created, and follow the steps to help me optimize my AI
```

That's it. Claude installs, reads the agent, and becomes your optimization guide.
Or manually:

```bash
npm install @foxruv/iris
```

Then tell Claude:

```
Read .claude/agents/iris.md and help me optimize
```

---
🧠 What Iris Handles (So You Don't Have To)

| You Used To... | Now You Just Say... |
|----------------|---------------------|
| `pip install dspy-ai` then write scripts | "Optimize my prompts" |
| `pip install ax-platform` then configure trials | "Find the best temperature" |
| Manually track what worked | "What patterns work best?" |
| Copy settings between projects | "Use what worked before" |
| Read docs for every tool | "Set up local LLM" |
| Write YAML configs | "Configure optimization" |

Iris installs, configures, runs, and applies. You just approve.

---
🔧 What's Under The Hood
Iris orchestrates powerful tools without you touching them:

DSPy (Stanford) - Prompt Optimization

```
Without Iris:
1. pip install dspy-ai
2. Learn the DSPy API
3. Write a training script
4. Collect examples
5. Run the MIPROv2 optimizer
6. Parse output
7. Apply to code

With Iris:
"Optimize my classifier"
→ Done. +15% accuracy.
```
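For the curious, the loop a prompt optimizer automates can be sketched in a few lines. This is an illustrative toy, not DSPy's or Iris's actual API: candidate prompts are scored against labeled examples and the best one wins (real optimizers like MIPROv2 also propose new candidates and pick few-shot demos automatically).

```typescript
// Toy prompt-optimization loop: score each candidate prompt against
// labeled examples and keep the best. The evaluate() function is a
// stand-in for a model call, rigged so more specific prompts win.
type Example = { input: string; expected: string };

function evaluate(prompt: string, examples: Example[]): number {
  let correct = 0;
  for (const ex of examples) {
    // Hypothetical stand-in for running the model with this prompt.
    const guess = prompt.includes("step by step") ? ex.expected : "";
    if (guess === ex.expected) correct++;
  }
  return correct / examples.length;
}

function optimizePrompt(
  candidates: string[],
  examples: Example[],
): { prompt: string; score: number } {
  let best = { prompt: candidates[0], score: -1 };
  for (const prompt of candidates) {
    const score = evaluate(prompt, examples);
    if (score > best.score) best = { prompt, score };
  }
  return best;
}

const examples: Example[] = [
  { input: "doc A", expected: "summary A" },
  { input: "doc B", expected: "summary B" },
];
const best = optimizePrompt(
  ["Summarize this.", "Summarize this step by step, in one sentence."],
  examples,
);
// best.prompt is the second candidate; best.score is 1
```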
Ax (Meta) - Hyperparameter Tuning

```
Without Iris:
1. pip install ax-platform
2. Define search space
3. Configure Bayesian optimization
4. Run 50+ trials
5. Analyze results
6. Apply best params

With Iris:
"Find the best settings"
→ Done. Temperature 0.7, top_p 0.9.
```
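The shape of the search problem can be shown with a plain grid search over a mock objective. This is not Ax's API: Ax replaces the exhaustive loop below with Bayesian optimization, which reaches good regions in far fewer trials. The objective here is rigged to peak at temperature 0.7 and top_p 0.9 purely for illustration.

```typescript
// Exhaustive grid search over (temperature, top_p) against a mock
// objective. The quadratic bowl below is an assumption for the demo.
type Params = { temperature: number; topP: number };

function mockAccuracy({ temperature, topP }: Params): number {
  return 1 - (temperature - 0.7) ** 2 - (topP - 0.9) ** 2;
}

function gridSearch(
  temps: number[],
  topPs: number[],
): { params: Params; score: number } {
  let result = {
    params: { temperature: temps[0], topP: topPs[0] },
    score: -Infinity,
  };
  for (const temperature of temps) {
    for (const topP of topPs) {
      const score = mockAccuracy({ temperature, topP });
      if (score > result.score) result = { params: { temperature, topP }, score };
    }
  }
  return result;
}

const result = gridSearch([0.3, 0.5, 0.7, 0.9, 1.0], [0.8, 0.9, 1.0]);
// result.params is { temperature: 0.7, topP: 0.9 }
```

A 5×3 grid costs 15 trials here; Bayesian optimization typically spends its trial budget adaptively instead of uniformly.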
AgentDB - Learning Store

```
Without Iris:
- Every optimization starts from scratch
- Repeat same experiments
- Forget what worked

With Iris:
- Remembers every optimization
- "Use what worked on my last project"
- Patterns compound over time
```

---
📈 The Learning Loop
Iris gets smarter the more you use it:

```
Week 1: "Optimize my summarizer"
→ Runs 30 trials, finds best settings
→ Stores pattern: "structured output + temp 0.7 = +17%"

Week 2: "Optimize my classifier"
→ Recognizes similar task
→ Starts from proven patterns
→ Only 10 trials needed
→ +18% accuracy

Week 3: New project, same task type
→ "Based on your history, I recommend..."
→ 5 trials to confirm
→ Instant optimization
```

First optimization: 30 trials. Later: 5 trials. Same results.
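At its core, the learning loop is a store keyed by task type: record what worked, then seed the next run from the best known match. The sketch below is a hypothetical shape, not AgentDB's real schema.

```typescript
// Minimal pattern store: save (taskType, settings, observed gain),
// then seed future optimizations from the highest-gain match.
// Hypothetical structure for illustration only.
type Pattern = {
  taskType: string;
  settings: Record<string, number>;
  gain: number; // observed accuracy improvement, e.g. 0.17 = +17%
};

class PatternStore {
  private patterns: Pattern[] = [];

  save(p: Pattern): void {
    this.patterns.push(p);
  }

  // Best known pattern for a task type, ranked by observed gain.
  bestFor(taskType: string): Pattern | undefined {
    return this.patterns
      .filter((p) => p.taskType === taskType)
      .sort((a, b) => b.gain - a.gain)[0];
  }
}

const store = new PatternStore();
store.save({ taskType: "summarization", settings: { temperature: 0.7 }, gain: 0.17 });
store.save({ taskType: "summarization", settings: { temperature: 1.0 }, gain: 0.02 });

const seed = store.bestFor("summarization");
// seed.settings.temperature is 0.7: week-2 runs start near a proven optimum
```

Starting trials from a proven seed is why later optimizations need 5 trials instead of 30.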
---
🌐 Federated Learning (Optional)
Share what works across projects and teams:

```
You: "Enable federated learning"
Iris: "Done. Now:
✅ Patterns from Project A help Project B
✅ Team discoveries benefit everyone
✅ Your data stays private (only patterns shared)
Disable anytime with 'turn off federated learning'"
```

Your whole portfolio gets smarter together.
---
🏛️ AI Council - For High-Stakes Decisions
Before deploying to production, get a second opinion:

```
You: "Deploy my optimized prompt"
Iris: "This goes to 10k requests/day. Let me convene the Council..."
Council: "🏛️ APPROVED (87% confidence)
🧠 PatternMaster: ✅ Matches successful patterns
⚖️ PerformanceJudge: ✅ All metrics improved
🛡️ SafetyValidator: ⚠️ Add rollback trigger
Recommendation: Deploy to 10% first, monitor 24h"
Iris: "Council approved with safeguards. Deploying..."
```

6 specialized agents validate your changes before they go live.
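A weighted approval vote like the Council's can be pictured as follows. The agent weights match the architecture diagram below; the 2/3 approval threshold is an illustrative assumption, not Iris's documented rule.

```typescript
// Weighted consensus vote: confidence is the weight-share of approving
// agents; the decision passes when it clears the threshold.
type Vote = { agent: string; weight: number; approve: boolean };

function consensus(
  votes: Vote[],
  threshold = 2 / 3, // assumed cutoff for this sketch
): { approved: boolean; confidence: number } {
  const total = votes.reduce((sum, v) => sum + v.weight, 0);
  const inFavor = votes
    .filter((v) => v.approve)
    .reduce((sum, v) => sum + v.weight, 0);
  const confidence = inFavor / total;
  return { approved: confidence >= threshold, confidence };
}

const decision = consensus([
  { agent: "PatternMaster", weight: 2.0, approve: true },
  { agent: "PerformanceJudge", weight: 2.0, approve: true },
  { agent: "PromptScientist", weight: 2.0, approve: true },
  { agent: "TransferTester", weight: 1.5, approve: true },
  { agent: "SafetyValidator", weight: 1.5, approve: false },
]);
// 7.5 of 9.0 weight in favor: confidence ~0.83, approved
```

One dissenting validator lowers confidence without blocking the deploy, which is why approvals can come back "with conditions".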
---
🏗️ Architecture

```
                  ┌─────────────────────────────────┐
                  │               YOU               │
                  │    "Optimize my summarizer"     │
                  └────────────────┬────────────────┘
                                   │
                                   ▼
                  ┌─────────────────────────────────┐
                  │           IRIS AGENT            │
                  │    Understands intent, plans    │
                  └────────────────┬────────────────┘
                                   │
           ┌───────────────────────┼───────────────────────┐
           ▼                       ▼                       ▼
┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
│   DSPy (Stanford)   │ │      Ax (Meta)      │ │       AgentDB       │
│  Prompt Optimizer   │ │   Hyperparameter    │ │    Learning Store   │
│                     │ │       Tuning        │ │                     │
│   MIPROv2, COPRO,   │ │   Bayesian search   │ │  Patterns, history  │
│  BootstrapFewShot   │ │     352x faster     │ │    Cross-project    │
└──────────┬──────────┘ └──────────┬──────────┘ └──────────┬──────────┘
           │                       │                       │
           └───────────────────────┼───────────────────────┘
                                   │
                                   ▼
                  ┌─────────────────────────────────┐
                  │           AI COUNCIL            │
                  │    (High-stakes validation)     │
                  │                                 │
                  │  🧠 PatternMaster (2.0x)        │
                  │  ⚖️ PerformanceJudge (2.0x)     │
                  │  🔬 PromptScientist (2.0x)      │
                  │  🔁 TransferTester (1.5x)       │
                  │  🛡️ SafetyValidator (1.5x)      │
                  │  🎯 Consensus → APPROVE/REJECT  │
                  └────────────────┬────────────────┘
                                   │
                                   ▼
                  ┌─────────────────────────────────┐
                  │            YOUR CODE            │
                  │      Optimized & validated      │
                  └─────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│                       FEDERATED LEARNING                        │
│                                                                 │
│   Project A ─────────► Supabase ─────────► Project B           │
│       │                    │                   │               │
│   Patterns              Shared              Patterns           │
│    learned           intelligence            applied           │
│                                                                 │
│   "Structured prompts + temp 0.7 = +17% accuracy"              │
│   → Now available to ALL your projects                         │
└─────────────────────────────────────────────────────────────────┘
```

---
🔌 Local LLM Support
Use Ollama, llama.cpp, or vLLM with automatic cloud fallback:

```
You: "Set up local LLM"
Iris: "I'll configure Ollama with Claude fallback:
✅ Simple tasks → Local (fast, free, private)
✅ Complex tasks → Cloud (better reasoning)
✅ Local fails → Automatic cloud fallback
Done. Best of both worlds."
```

---
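The routing policy for local LLM support reduces to: try local for simple tasks, use cloud for complex ones, and fall back to cloud when local fails. A minimal sketch, where the complexity heuristic and the model callbacks are assumptions, not Iris internals:

```typescript
// Local-first routing with automatic cloud fallback.
type Backend = "local" | "cloud";
type Model = (prompt: string) => string; // stand-in for an LLM call

function route(
  prompt: string,
  local: Model,
  cloud: Model,
): { backend: Backend; output: string } {
  const isComplex = prompt.length > 500; // toy complexity heuristic
  if (!isComplex) {
    try {
      // Fast, free, private path.
      return { backend: "local", output: local(prompt) };
    } catch {
      // Local backend failed (e.g. Ollama offline): fall through.
    }
  }
  return { backend: "cloud", output: cloud(prompt) };
}

// Example: the local model is down, so the cloud answers.
const res = route(
  "Summarize: hello world",
  () => {
    throw new Error("ollama offline");
  },
  () => "cloud answer",
);
// res.backend is "cloud"
```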
📦 What Gets Installed
After `npm install @foxruv/iris`:

```
.claude/
├── agents/
│   ├── iris/iris.md       ← Your optimization guide
│   └── council/council.md ← 6-agent validation system
└── skills/
    └── iris/iris.md       ← Detailed commands (for Claude, not you)
```

Tell Claude to read the agent. It handles everything else.
---
💬 Example Conversations

Fixing inconsistent outputs

```
You: "My summarizer outputs are inconsistent"
Iris: "I'll optimize for consistency. Need 5-10 examples of good outputs.
[You provide examples]
Running DSPy optimization...
📈 Consistency improved 40%
- Added structured output format
- Lowered temperature to 0.7
- Added 3 few-shot examples
Apply changes?"
```
Finding the right temperature

```
You: "What temperature should I use?"
Iris: "I'll run Bayesian optimization to find out...
Testing: 0.3, 0.5, 0.7, 0.9, 1.0
Results:
- Accuracy peaks at 0.7
- Creativity peaks at 0.9
- Your task (classification) → 0.7 recommended
Apply temperature 0.7?"
```
Transferring patterns between projects

```
You: "I optimized my summarizer. Will it help my other projects?"
Iris: "Let me check pattern similarity...
✅ Blog post generator: 78% similar → likely +12% improvement
✅ Email writer: 65% similar → likely +8% improvement
⚠️ Code reviewer: 23% similar → probably won't transfer
Apply to blog post generator?"
```
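One way to imagine the similarity check above is as feature overlap between task profiles. Jaccard similarity over hand-picked feature tags is used here as a stand-in for whatever Iris actually computes; the tags and threshold are assumptions for the demo.

```typescript
// Jaccard similarity between task feature sets: |A ∩ B| / |A ∪ B|.
// Transfer is deemed likely above an assumed 0.5 threshold.
function jaccard(a: Set<string>, b: Set<string>): number {
  const intersection = [...a].filter((x) => b.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : intersection / union;
}

// Hypothetical task profiles.
const summarizer = new Set(["text-in", "text-out", "compression", "single-doc"]);
const blogWriter = new Set(["text-in", "text-out", "compression", "long-form"]);
const codeReviewer = new Set(["code-in", "structured-out", "analysis"]);

const simBlog = jaccard(summarizer, blogWriter); // 3 shared of 5 total = 0.6
const simCode = jaccard(summarizer, codeReviewer); // 0 shared = 0

// simBlog clears the threshold (pattern likely transfers);
// simCode does not (pattern probably won't transfer).
```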
Validating a production deployment

```
You: "Deploy to production"
Iris: "This serves 50k requests/day. Running Council validation...
🏛️ Council Decision: APPROVED (with conditions)
Conditions:
1. Start with 10% traffic
2. Add rollback if accuracy drops >5%
3. Monitor 24 hours before full deployment
Proceed with safeguards?"
```

---
🛠️ Commands (You Never Run These)
Iris runs these behind the scenes. You just talk.

```bash
# Iris runs these silently:
npx iris discover        # Find optimization targets
npx iris optimize --strategy dspy --target src/summarize.ts
npx iris council analyze # Validate changes
npx iris federated sync  # Share patterns
npx iris apply --target src/summarize.ts
```

You never type these. You just say:
"Optimize my summarizer"
"Validate before deploying"
"Share patterns with my team"

---
🎯 Perfect For
- Solo developers - Get expert-level optimization without the expertise
- Teams - Share what works, stop repeating experiments
- Production apps - Council validation before deployment
- Multiple projects - Patterns transfer automatically
- Learning - Understand what Iris does by asking "show me what you're doing"
---
📚 More Resources
- Quick Start Guide
- Credentials Guide
- GitHub
---
🚀 Get Started
Just type this into Claude Code:

```
Install @foxruv/iris@latest, find the agent and skill files it created, and help me optimize my AI
```

Claude handles everything. Your AI gets better. You just talk.