A coding-agent-friendly reporter for Playwright, so that your coding agent finally understands what is going wrong.
```bash
npm install @zenai/playwright-coding-agent-reporter
```


A specialized Playwright reporter designed for AI/LLM coding agents that provides minimal, structured test failure reporting to maximize context efficiency and actionable insights. Works well with coding agents such as Claude Code, Codex, Aider, Roo Code, and Cursor.
- Error-Focused: Captures complete failure context including exact line numbers, stack traces, and page state
- Rich Context: Includes console errors, network failures, and screenshots
- Smart Selector Suggestions: Uses Levenshtein distance to suggest similar selectors when elements aren't found (see the sketch after this list)
- Markdown Reports: Clean, structured markdown output for easy parsing by LLMs
- Performance Optimized: Minimal overhead, async file operations
- Highly Configurable: Customize what data to capture and report
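The selector suggestions are based on edit distance. The sketch below is purely illustrative (it is not the reporter's actual implementation): it ranks the selectors found on the page by Levenshtein distance to the selector that failed to match, so near-misses like typos surface first.

```typescript
// Hypothetical sketch: rank known selectors by Levenshtein distance to a missing one.
function levenshtein(a: string, b: string): number {
  // dp[i][j] = edit distance between a[0..i) and b[0..j)
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + cost, // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Return the candidates closest to the selector that failed to match.
function suggestSelectors(missing: string, available: string[], limit = 3): string[] {
  return [...available]
    .sort((x, y) => levenshtein(missing, x) - levenshtein(missing, y))
    .slice(0, limit);
}

// '#submit-btn' ranks first for the missing '#submit-button'.
console.log(suggestSelectors('#submit-button', ['#submit-btn', '.checkout', 'button.primary']));
```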
```bash
npm install --save-dev @zenai/playwright-coding-agent-reporter
```
Add the reporter to your Playwright configuration:
```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    [
      '@zenai/playwright-coding-agent-reporter',
      {
        outputDir: 'test-report-for-coding-agents',
        includeScreenshots: true, // Include screenshots in reports when available
        silent: false, // Show helpful console output
        singleReportFile: true, // All errors in one file
      },
    ],
  ],
  use: {
    // IMPORTANT: Configure Playwright to take screenshots on failure
    screenshot: 'only-on-failure', // This tells Playwright WHEN to take screenshots
    video: 'off', // Turn off video by default for efficiency
  },
});
```
#### Screenshot Configuration
Important: Screenshot capture is controlled at two levels:

1. Playwright level (use.screenshot): Controls WHEN screenshots are taken
   - 'off' - No screenshots
   - 'on' - Always take screenshots
   - 'only-on-failure' - Only on test failure (recommended)
2. Reporter level (includeScreenshots): Controls whether captured screenshots are included in reports
   - true - Include screenshots in error reports when they exist (default)
   - false - Don't include screenshots in reports, even if Playwright captured them

For optimal debugging, use:

- screenshot: 'only-on-failure' in the Playwright config (to capture screenshots)
- includeScreenshots: true in the reporter config (to include them in reports)
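As a minimal sketch, the two levels sit side by side in the same config file (same options as documented above):

```typescript
// playwright.config.ts - minimal sketch showing both levels together
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    screenshot: 'only-on-failure', // Playwright level: WHEN screenshots are taken
  },
  reporter: [
    // Reporter level: whether captured screenshots are included in reports
    ['@zenai/playwright-coding-agent-reporter', { includeScreenshots: true }],
  ],
});
```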
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| outputDir | string | 'test-report-for-coding-agents' | Directory for report output |
| includeScreenshots | boolean | true | Include screenshots in error reports when available (see note below) |
| includeConsoleErrors | boolean | true | Capture console errors and warnings |
| includeNetworkErrors | boolean | true | Capture network request failures |
| includeVideo | boolean | false | Include video references in reports when available (Playwright must have video enabled) |
| silent | boolean | false | Suppress per-test pass output; still shows summary |
| maxErrorLength | number | 5000 | Maximum error message length |
| singleReportFile | boolean | true | Generate single consolidated error-context.md file |
| capturePageState | boolean | true | Capture page state on failure (URL, title, available selectors, visible text) |
| verboseErrors | boolean | true | Show detailed error list after summary. Set to false for only concise summary |
| maxInlineErrors | number | 5 | Maximum number of errors to show in console output |
| showCodeSnippet | boolean | true | Show code snippet at error location |
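As an illustration, a reporter entry that overrides several of these defaults might look like the sketch below; the option names come from the table, while the specific values are arbitrary examples.

```typescript
// playwright.config.ts - example values only; option names as in the table above
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    [
      '@zenai/playwright-coding-agent-reporter',
      {
        outputDir: 'agent-reports', // custom report directory
        includeVideo: true,         // only useful if video is enabled below
        maxErrorLength: 2000,       // trim very long error messages
        maxInlineErrors: 3,         // cap errors echoed to the console
        showCodeSnippet: true,      // keep code snippets at error locations
      },
    ],
  ],
  use: {
    video: 'retain-on-failure',     // required for includeVideo to have anything to reference
  },
});
```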
When tests fail, the reporter generates a consolidated report and per-test artifacts:
```
test-report-for-coding-agents/
├── all-failures.md     # Consolidated failure report (all failures)
├── basic-reporter-features-failing-test-element-not-found-7/
│   ├── report.md       # Detailed error report for this test
│   └── screenshot.png  # Screenshot at failure (if enabled)
├── basic-reporter-features-failing-test-assertion-failure-6/
│   ├── report.md       # Detailed error report for this test
│   └── screenshot.png  # Screenshot at failure (if enabled)
├── timeout-handling-timeout-waiting-for-element-shows-enhanced-context-7/
│   ├── report.md       # Detailed error report with timeout context
│   └── screenshot.png  # Screenshot at timeout
└── ...
```
Notes:

- all-failures.md - Consolidated report containing all test failures with summaries and links to individual reports
- report.md - Per-test detailed error report with full context, stack traces, and debugging information
- screenshot.png - Visual state captured at the moment of failure (when screenshots are enabled)
- Folder names are generated from suite and test names with a test index suffix for uniqueness
Each failure report includes:
- Test Location: Exact file path and line number
- Error Details: Complete error message and stack trace with enhanced timeout context
- Page Context: Current URL, page title, screenshot reference
- Available Selectors: Sorted by relevance when element not found
- Action History: Recent test actions before failure
- Console Output: Captured JavaScript errors and warnings
- Network Errors: Failed network requests
- Screenshots: Visual state at failure with direct links
- HTML Context: Relevant HTML around failed selectors
- Quick Links: Navigation to individual test folders (in consolidated report)
The reporter shows a concise summary at the end of test execution:
```
Running 16 tests using 4 workers
·F·F·F-FFFF·FFFFF
E2E Test Run: 1/16 passed (15 failed/skipped) in 3.1s
FAILED (15):
✘ reporter-demo.spec.ts:16 - Reporter Core Features - assertion failure with context - Expected "Expected Title", got "Actual Title"
✘ reporter-demo.spec.ts:23 - Reporter Core Features - console errors capture - Uncaught exception
✘ reporter-demo.spec.ts:9 - Reporter Core Features - element not found - suggestions - Element not found: [.non-existent-selector]
See for failed test details: ./test-report-for-coding-agents/
```
When verboseErrors: true (default), detailed error information follows:
```
## 1) test/e2e/reporter-demo.spec.ts:20:7 › Reporter Core Features › assertion failure with context

Duration: 583ms

### Error

Error: expect(received).toBe(expected) // Object.is equality

Expected: "Expected Title"
Received: "Actual Title"

### Code Location (TypeScript)

  18 |
  19 |   const title = await page.locator('h1').textContent();
> 20 |   expect(title).toBe('Expected Title');
     |                 ^
  21 | });

### Page State When Failed

URL: data:text/html,

Full Error Context: test-report-for-coding-agents/reporter-core-features-assertion-failure-with-context-4/report.md
```
The new concise summary format provides:
- One-line overview: Shows passed/failed/skipped counts and total duration
- Failed tests only: Lists only failed tests with file:line, test name, and brief error
- Skipped tests: Shows skipped tests when present
- Report directory: Points to detailed reports using your configured outputDir
The console summary also points to the consolidated report:

```
Detailed error report: test-report-for-coding-agents/all-failures.md
```

To show only the summary without detailed errors, set verboseErrors: false in your configuration.
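For example (a minimal sketch; only the relevant option is shown):

```typescript
// playwright.config.ts - print only the concise summary, no detailed error sections
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['@zenai/playwright-coding-agent-reporter', { verboseErrors: false }],
  ],
});
```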
This reporter is optimized for AI coding assistants (Claude Code, Codex, Aider, Roo Code, Cursor, etc.). When tests fail:
1. Single File Context: The AI reads one all-failures.md file containing all failures
2. Structured Information: Each failure includes exact line numbers, error messages, and stack traces
3. Visual Context: Screenshots and smart selector suggestions provide debugging insights
4. Immediate Debugging: Console and network errors are captured inline
5. Quick Reproduction: Ready-to-run commands for each failing test
The consolidated format minimizes token usage while maximizing debugging information.
```typescript
import { test, expect } from '@playwright/test';

test('user can complete checkout', async ({ page }) => {
  // The reporter will capture all of this context on failure
  await page.goto('/shop');

  // Console errors are automatically captured
  await page.evaluate(() => {
    console.error('Payment processing failed');
  });

  // Network failures are tracked
  await page.route('**/api/checkout', (route) => route.abort());

  // Screenshots and available selectors captured on failure
  await expect(page.locator('.checkout-success')).toBeVisible();
});
```
```bash
npm run build
```

Run unit tests (Vitest - no browser required):

```bash
npm run test:unit
```

Watch mode:

```bash
npm run watch
```

## Why Use This Reporter?
The default Playwright reporter surfaces the error, but often lacks enough surrounding context for a coding model to understand what actually went wrong and what the page state was at failure time. It's hard for coding agents to debug with just the error text.
This reporter focuses on actionable context for agents:
- Dot progress output: Concise dot progress with immediate failed test listing, detailed sections only for failures
- Page state snapshot: URL, title, visible text, nearby/available selectors, recent actions
- Structured errors: Consistent formatting with code snippets and stack traces
- Repro commands: Ready-to-run commands per failing test
- Markdown reports: Single consolidated file plus per-test reports for targeted review
#### Comparison: Standard vs Coding Agent Reporter

Here's the same failing test with both reporters - notice how our reporter provides solution context.

Standard Playwright reporter output:

```
Error: expect(locator).toBeVisible() failed

Locator: locator('#submit-button')
Expected: visible
Received:
Timeout: 2000ms
Call log:
- Expect "toBeVisible" with timeout 2000ms
- waiting for locator('#submit-button')
```

Coding Agent Reporter console output:
```
1) element not found - selector suggestions

Duration: 2219ms

### Error

Error: expect(locator).toBeVisible() failed

Locator: locator('#submit-button')
Expected: visible
Received:
Timeout: 2000ms

### Code Location (TypeScript)

  11 |
  12 |   // Reporter should suggest similar selectors
> 13 |   await expect(page.locator('#submit-button')).toBeVisible();
     |                                                 ^
  14 | });

### Page State When Failed

URL: data:text/html,
Screenshot: Saved to screenshot.png

### Action History

2025-09-08T17:53:07.848Z - Navigating to: data:text/html,
2025-09-08T17:53:07.859Z - DOM ready
2025-09-08T17:53:07.860Z - Page loaded

### Available Selectors

#submit-btn
button:has-text("Submit")

### Visible Text

Submit

Full Error Context: /path/to/detailed-report.md
```

Key Differences:
- ✅ Exact code location with context lines
- ✅ Available selectors - shows #submit-btn is available (typo fix!)
- ✅ Action history - what happened before the failure
- ✅ Page context - URL and visible content
- ✅ Structured markdown reports - for detailed analysis

The Result: AI agents can immediately see the typo (#submit-button vs #submit-btn) and suggest the fix!

## Contributing
This project uses semantic-release for automated releases.
- Prefer squash merges. The pull request title should follow Conventional Commits; individual commit messages do not need to.
- The PR title drives the release notes and version bump.
#### Release Process
Releases are fully automated via GitHub Actions:
1. Merge to main: Use squash merge; ensure the PR title follows Conventional Commits
2. Automatic versioning: semantic-release analyzes the PR title and determines version bump
3. NPM publish: Package is automatically published to NPM
4. GitHub Release: Creates GitHub release with changelog
5. Git tags: Creates appropriate version tags
#### Publishing Setup
To enable automated publishing:

1. NPM Token: Add NPM_TOKEN secret to your GitHub repository
   - Get token from npmjs.com → Account Settings → Access Tokens
   - Create "Automation" token with publish permissions
   - Add to GitHub: Settings → Secrets → Actions → New repository secret
2. GitHub Token: GITHUB_TOKEN is automatically provided by GitHub Actions
3. Branch Protection (optional but recommended):
   - Protect the `main` branch

## License

MIT