Enterprise-grade AI-powered test reporter for Playwright with automatic bug creation, intelligent fix suggestions, auto-healing PRs, and multi-provider support for CI/CD pipelines.
```bash
npm install playwright-ai-reporter
```






Transform test failures into actionable insights with AI-powered analysis and auto-healing
Playwright AI Reporter is an enterprise-grade, production-ready test reporter that combines artificial intelligence with comprehensive test automation workflows. Built on a flexible provider-based architecture, it automatically analyzes test failures, creates detailed bug reports, generates fix suggestions, and can even submit auto-healing pull requests, all while integrating seamlessly with your existing development tools.
- **AI-Powered Analysis** - Multiple AI providers (Azure OpenAI, Anthropic Claude, Google Gemini, Mistral, OpenAI) analyze failures and suggest intelligent fixes
- **Plug & Play Architecture** - Swap bug trackers, databases, AI providers, and notification systems without code changes
- **Auto-Healing Tests** - Automatically generate and submit PRs with AI-suggested fixes for flaky or failing tests
- **Enterprise Integration** - Native support for GitHub, Azure DevOps, Jira, MySQL, SQLite, SMTP, and more
- **Rich Reporting** - Colorized console output, comprehensive metrics, historical analysis, and build integration
- **Production-Ready** - TypeScript, fully tested, extensive documentation, CI/CD workflows included
> **Perfect for:** CI/CD pipelines • Enterprise test automation • Multi-team projects • Flaky test management • Test debugging at scale
---
## Table of Contents

- Features
- Architecture
- Quick Start
- Installation
- Configuration
- Provider Support
- Usage Examples
- Output Examples
- FAQs
- Documentation
- Contributing
- License
---
## Features

### Core Reporting

- ✅ Colorized console output (Passed, Failed, Retries, Skipped)
- Comprehensive test metrics and statistics
- Slowest test identification and ranking
- Average test duration analysis
- Test history tracking and comparison
- CI/CD integration with build information
- Interactive HTML Report: self-contained HTML dashboard with charts, test details, and AI fix suggestions
### AI-Powered Analysis

- Multi-AI Provider Support: Azure OpenAI, OpenAI, Anthropic (Claude), Google AI (Gemini), Mistral AI
- Automatic Fix Suggestions: AI analyzes failures and suggests fixes
- Context-Aware Analysis: includes test code, error details, and stack traces
- Smart Categorization: automatic error categorization (Timeout, Selector, Network, etc.); see the sketch at the end of this section
- Best Practices: suggestions follow Playwright best practices
### Bug Tracking

- Multi-Platform Bug Creation: GitHub Issues, Azure DevOps Work Items, Jira Tickets
- Rich Bug Details: test info, error details, AI suggestions, environment data
- Smart Labeling: automatic labels and priority assignment
- Integrated Tracking: links bugs to test runs and failures
### Auto-Healing PRs

- Automatic PR Creation: generate PRs with AI-suggested fixes (set `generatePR: true`)
- Branch Management: auto-create topic branches (`autofix/test-name-timestamp`)
- Smart Commits: commit fixes to the topic branch with detailed messages
- Push & PR: push changes and create a pull request from the topic branch to the base branch
- Rich PR Descriptions: include error analysis, test details, and fix rationale
- Draft PRs: created as drafts for mandatory code review
- Auto Labels: `auto-fix`, `test-failure`, `ai-generated`
- Platform Support: GitHub and Azure DevOps
### Database Storage

- Test Run Tracking: store complete test run metadata (environment, branch, commit, totals, duration)
- Result History: track individual test results over time with full details
- Failure Analysis: query and analyze failure patterns with indexed searches
- Build Integration: link results to CI/CD builds with metadata
- Multi-Database: SQLite (file-based), MySQL, and PostgreSQL support
- Schema: two tables (`test_runs`, `test_results`) with four performance indexes
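
Categorization like the "Smart Categorization" feature above is typically simple pattern-matching on the failure message. A minimal sketch of the idea (the function, category set, and regexes are illustrative assumptions, not the reporter's actual implementation):

```typescript
// Illustrative only: categorize a failure message by pattern matching.
// The category names mirror those listed above; the function is hypothetical.
type ErrorCategory = 'Timeout' | 'Selector' | 'Network' | 'Assertion' | 'Unknown';

function categorizeError(message: string): ErrorCategory {
  if (/timeout|exceeded/i.test(message)) return 'Timeout';
  if (/locator|selector|element/i.test(message)) return 'Selector';
  if (/ECONNREFUSED|ENOTFOUND|network|fetch failed/i.test(message)) return 'Network';
  if (/toBe|toEqual|expect.*received/i.test(message)) return 'Assertion';
  return 'Unknown';
}
```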
---
## Architecture

```mermaid
graph TB
subgraph "Playwright Test Runner"
Tests[Test Execution]
end
subgraph "AI Test Reporter"
Reporter[Reporter Core]
Registry[Provider Registry]
Workflow[Test Workflow Engine]
end
subgraph "AI Providers"
Azure[Azure OpenAI]
OpenAI[OpenAI]
Anthropic[Anthropic Claude]
Google[Google Gemini]
Mistral[Mistral AI]
end
subgraph "Bug Trackers"
GitHub[GitHub Issues]
ADO[Azure DevOps]
Jira[Jira]
end
subgraph "Databases"
SQLite[SQLite]
MySQL[MySQL]
Postgres[PostgreSQL]
end
subgraph "Notifications"
Email[Email/SMTP]
Slack[Slack]
Teams[MS Teams]
end
subgraph "PR Providers"
GHPR[GitHub PRs]
ADOPR[Azure Repos PRs]
end
Tests --> Reporter
Reporter --> Registry
Registry --> Workflow
Workflow --> Azure
Workflow --> OpenAI
Workflow --> Anthropic
Workflow --> Google
Workflow --> Mistral
Workflow --> GitHub
Workflow --> ADO
Workflow --> Jira
Workflow --> SQLite
Workflow --> MySQL
Workflow --> Postgres
Workflow --> Email
Workflow --> Slack
Workflow --> Teams
Workflow --> GHPR
Workflow --> ADOPR
style Reporter fill:#4CAF50
style Registry fill:#2196F3
style Workflow fill:#FF9800
```
The reporter uses a provider-based architecture for maximum flexibility:
```mermaid
graph LR
subgraph "Application Layer"
Reporter[Test Reporter]
Utils[Utilities]
end
subgraph "Provider Registry"
Registry[Provider Registry<br/>Singleton Manager]
end
subgraph "Factory Layer"
AIFactory[AI Factory]
BugFactory[Bug Tracker Factory]
DBFactory[Database Factory]
NotifyFactory[Notification Factory]
PRFactory[PR Factory]
end
subgraph "Provider Interfaces"
IAI[IAIProvider]
IBug[IBugTrackerProvider]
IDB[IDatabaseProvider]
INotify[INotificationProvider]
IPR[IPRProvider]
end
subgraph "Concrete Implementations"
AzureAI[Azure OpenAI]
OpenAI[OpenAI]
GitHubBug[GitHub Issues]
SQLite[SQLite]
EmailNotify[Email]
GitHubPR[GitHub PRs]
end
Reporter --> Registry
Utils --> Registry
Registry --> AIFactory
Registry --> BugFactory
Registry --> DBFactory
Registry --> NotifyFactory
Registry --> PRFactory
AIFactory --> IAI
BugFactory --> IBug
DBFactory --> IDB
NotifyFactory --> INotify
PRFactory --> IPR
IAI --> AzureAI
IAI --> OpenAI
IBug --> GitHubBug
IDB --> SQLite
INotify --> EmailNotify
IPR --> GitHubPR
style Registry fill:#FF6B6B
style IAI fill:#4ECDC4
style IBug fill:#4ECDC4
style IDB fill:#4ECDC4
style INotify fill:#4ECDC4
style IPR fill:#4ECDC4
```
```mermaid
sequenceDiagram
participant PT as Playwright Test
participant R as Reporter
participant AI as AI Provider
participant BT as Bug Tracker
participant PR as PR Provider
participant DB as Database
participant N as Notification
PT->>R: Test Failed
R->>R: Categorize Error
R->>R: Extract Test Code
R->>AI: Generate Fix Suggestion
AI-->>R: AI Analysis & Fix
par Parallel Operations
R->>BT: Create Bug/Issue
BT-->>R: Bug Created
and
R->>DB: Save Test Result
DB-->>R: Result Saved
end
alt Auto-Healing Enabled
R->>PR: Create Fix PR
PR-->>R: PR Created
end
R->>N: Send Notification
N-->>R: Notification Sent
R->>PT: Report Complete
```
| Component | Description |
| --------------------- | ------------------------------------------------------------- |
| Reporter | Main entry point implementing Playwright's Reporter interface |
| Provider Registry | Centralized provider management with lazy initialization |
| AI Providers | Multiple AI service implementations for fix suggestions |
| Bug Trackers | Issue/ticket creation across platforms |
| Databases | Test result storage and historical analysis |
| PR Providers | Automated pull request creation |
| Notifications | Alert delivery across channels |
| Factories | Provider instantiation with configuration |
| Workflow Engine | Orchestrates the test failure handling process |
- Provider Independence - Not locked into any single service
- Factory Pattern - Clean, standardized provider creation
- Lazy Initialization - Resources loaded only when needed (see the sketch below)
- Type Safety - Full TypeScript support
- Testable - Easy mocking for unit tests
- Modular - Import only what you need
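
A minimal sketch of the lazy-initialization pattern behind the registry (the class and factory names here are hypothetical; the package's real entry point is `ProviderRegistry`, shown under Usage Examples):

```typescript
// Illustrative pattern only: create a provider the first time it is requested,
// then reuse the cached instance. All names here are hypothetical.
interface IAIProvider {
  generateCompletion(messages: {role: string; content: string}[]): Promise<{content: string}>;
}

class LazyRegistry {
  private aiProvider?: IAIProvider;

  async getAIProvider(): Promise<IAIProvider> {
    if (!this.aiProvider) {
      // Expensive setup (API clients, auth) happens only on first use.
      this.aiProvider = await createConfiguredAIProvider();
    }
    return this.aiProvider;
  }
}

// Hypothetical factory standing in for the package's AIProviderFactory.
declare function createConfiguredAIProvider(): Promise<IAIProvider>;
```

Caching the instance keeps repeated lookups cheap and yields a single configured client per provider type.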
---
## Quick Start

### Prerequisites

- Node.js 18 or higher
- Playwright 1.51 or higher
- An AI provider API key (Azure OpenAI, OpenAI, Anthropic, Google AI, or Mistral)
### 1. Install

```bash
# Install the reporter
npm install playwright-ai-reporter --save-dev
```

### 2. Choose a Configuration
Copy one of the pre-configured environment files from the examples:
```bash
# GitHub + Mistral AI + SQLite
cp examples/env-configs/.env.github-stack .env

# OR Azure DevOps + Azure OpenAI + MySQL
cp examples/env-configs/.env.azure-stack .env

# OR Jira + OpenAI + SQLite
cp examples/env-configs/.env.openai-jira .env

# OR Claude AI only (minimal setup)
cp examples/env-configs/.env.anthropic-minimal .env
```

### 3. Add Your API Keys
Edit your `.env` file with your API keys and settings:

```env
# AI Provider (choose one)
AI_PROVIDER=mistral
MISTRAL_API_KEY=your-api-key-here

# Bug Tracker (optional)
BUG_TRACKER_PROVIDER=github
GITHUB_TOKEN=ghp_your_personal_access_token
GITHUB_OWNER=your-org
GITHUB_REPO=your-repo

# Database (optional)
DATABASE_PROVIDER=sqlite
SQLITE_DATABASE_PATH=./data/test-results.db

# PR Provider (optional - for auto-PR generation)
PR_PROVIDER=github
BASE_BRANCH=main
```

### 4. Configure the Reporter

Update your `playwright.config.ts`:

```typescript
import {defineConfig} from '@playwright/test';

export default defineConfig({
reporter: [
['list'],
[
'playwright-ai-reporter',
{
// Test thresholds
slowTestThreshold: 3,
maxSlowTestsToShow: 5,
// Output
outputDir: './test-results',
showStackTrace: true,
// AI & Automation features
generateFix: true, // Generate AI fix suggestions
createBug: false, // Auto-create bugs for failures
generatePR: false, // Auto-create PRs with fixes
publishToDB: false, // Save to database
sendEmail: false, // Send email notifications
},
],
],
});
```

### 5. Validate Your Configuration
```bash
# From the examples folder
cd examples
npm install
npm run validate:config # Check configuration
```

### 6. Run Your Tests
```bash
# From the examples folder
npm test
```

That's it! The reporter will now analyze failures, generate AI-powered fix suggestions, and optionally create bugs, PRs, or store results in a database, depending on your configuration.
---
## Installation

### npm
```bash
npm install playwright-ai-reporter --save-dev
```

### yarn

```bash
yarn add -D playwright-ai-reporter
```

### pnpm

```bash
pnpm add -D playwright-ai-reporter
```

---
## Configuration

### Reporter Options
| Option | Type | Default | Description |
| ------------------------- | --------- | ---------------- | -------------------------------------------------------- |
| `slowTestThreshold` | number | 5 | Tests slower than this (seconds) are flagged as slow |
| `maxSlowTestsToShow` | number | 3 | Maximum number of slow tests to display in the report |
| `timeoutWarningThreshold` | number | 30 | Warn if tests approach this timeout value (seconds) |
| `showStackTrace` | boolean | true | Include full stack traces in error reports |
| `outputDir` | string | `./test-results` | Directory for JSON output files and AI-generated fixes |
| `generateFix` | boolean | false | Generate AI-powered fix suggestions (saved to files) |
| `createBug` | boolean | false | Auto-create bugs for failures (requires a bug tracker) |
| `generatePR` | boolean | false | Auto-create PRs with fixes (requires `generateFix: true`) |
| `publishToDB` | boolean | false | Publish test results to the database (requires a DB provider) |
| `sendEmail` | boolean | false | Send email notifications (requires email configuration) |
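
Putting the table together, a configuration sketch that spells out every documented option at its default value (the values mirror the table above; adjust as needed):

```typescript
// playwright.config.ts: sketch showing every documented option at its default.
import {defineConfig} from '@playwright/test';

export default defineConfig({
  reporter: [
    [
      'playwright-ai-reporter',
      {
        slowTestThreshold: 5,        // seconds; slower tests are flagged
        maxSlowTestsToShow: 3,       // slow tests listed in the report
        timeoutWarningThreshold: 30, // seconds; warn as tests approach timeout
        showStackTrace: true,        // include full stack traces
        outputDir: './test-results', // JSON output and AI-generated fixes
        generateFix: false,          // AI fix suggestions
        createBug: false,            // auto-create bugs
        generatePR: false,           // auto-create PRs (needs generateFix: true)
        publishToDB: false,          // store results in a database
        sendEmail: false,            // email notifications
      },
    ],
  ],
});
```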
### Common Configurations

#### Generate Fix Only (Default)
```typescript
{ generateFix: true, createBug: false, generatePR: false, publishToDB: false }
// → Creates AI fix suggestions in test-results/fixes/
```

#### Create Bugs for Failures

```typescript
{ generateFix: false, createBug: true, generatePR: false, publishToDB: false }
// → Creates bugs in GitHub/Jira/Azure DevOps for each failure
```

#### Generate Fix + Auto PR

```typescript
{ generateFix: true, createBug: false, generatePR: true, publishToDB: false }
// → Creates fix files + topic branch + draft PR with fixes
```

#### Full Stack (All Features)

```typescript
{ generateFix: true, createBug: true, generatePR: true, publishToDB: true, sendEmail: true }
// → AI fixes + bug tracking + PRs + database logging + email notifications
```

### Auto-Healing Workflow
1. **Test Fails** → AI analyzes the failure and generates a fix suggestion
2. **Generate Fix** (`generateFix: true`) → Creates fix files in `test-results/fixes/`
3. **Create Branch** (`generatePR: true`) → Creates a topic branch `autofix/test-name-{timestamp}`
4. **Commit Changes** → Commits the AI fix to the topic branch with a detailed message
5. **Create PR** → Opens a draft PR from the topic branch to the base branch with:
   - Error details and AI analysis
   - Labels: `auto-fix`, `test-failure`, `ai-generated`
   - Links to the commit and test details
6. **Review & Merge** → Team reviews the draft PR before merging
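
Topic branches follow the `autofix/test-name-{timestamp}` pattern above. A minimal sketch of how such a name could be derived (the helper and its exact timestamp format are hypothetical, not the reporter's actual code):

```typescript
// Illustrative: derive a branch name in the autofix/test-name-{timestamp} format.
function topicBranchName(testName: string, now = new Date()): string {
  const slug = testName
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-|-$/g, '');
  const timestamp = now.toISOString().replace(/[:.]/g, '-');
  return `autofix/${slug}-${timestamp}`;
}

// topicBranchName('API test') → e.g. "autofix/api-test-2025-12-30T10-30-45-000Z"
```

---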
## Provider Support

### AI Providers
| Provider | Status | Configuration |
| ---------------------- | ------------------- | ------------------------------------------------------- |
| Azure OpenAI | ✅ Production Ready | `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_DEPLOYMENT_NAME` |
| OpenAI | ✅ Production Ready | `OPENAI_API_KEY`, `OPENAI_MODEL` |
| Anthropic (Claude) | ✅ Production Ready | `ANTHROPIC_API_KEY`, `ANTHROPIC_MODEL` |
| Google AI (Gemini) | ✅ Production Ready | `GOOGLE_AI_API_KEY`, `GOOGLE_AI_MODEL` |
| Mistral AI | ✅ Production Ready | `MISTRAL_API_KEY`, `MISTRAL_MODEL` |

### Bug Trackers
| Provider | Status | Configuration |
| ----------------- | ------------------- | ------------------------------------------------------------------ |
| GitHub Issues | ✅ Production Ready | `GITHUB_TOKEN`, `GITHUB_OWNER`, `GITHUB_REPO` |
| Azure DevOps | ✅ Production Ready | `AZURE_DEVOPS_ORG_URL`, `AZURE_DEVOPS_PROJECT`, `AZURE_DEVOPS_PAT` |
| Jira | ✅ Production Ready | `JIRA_HOST`, `JIRA_EMAIL`, `JIRA_API_TOKEN`, `JIRA_PROJECT_KEY` |

### Databases
| Provider | Status | Configuration |
| -------------- | ------------------- | -------------------------------------------------------------- |
| SQLite | ✅ Production Ready | `SQLITE_DB_PATH` (optional) |
| MySQL | ✅ Production Ready | `MYSQL_HOST`, `MYSQL_USER`, `MYSQL_PASSWORD`, `MYSQL_DATABASE` |
| PostgreSQL | 🚧 Coming Soon | - |

### Notifications
| Provider | Status | Configuration |
| ------------------- | ------------------- | ---------------------------------------------------------------------- |
| Email (SMTP) | ✅ Production Ready | `EMAIL_HOST`, `EMAIL_USER`, `EMAIL_PASSWORD`, `EMAIL_FROM`, `EMAIL_TO` |
| Slack | 🚧 Coming Soon | `SLACK_WEBHOOK_URL` |
| Microsoft Teams | 🚧 Coming Soon | `TEAMS_WEBHOOK_URL` |

### PR Providers
| Provider | Status | Configuration |
| ---------------- | ------------------- | ----------------------------------- |
| GitHub | ✅ Production Ready | Uses `GITHUB_*` configuration |
| Azure DevOps | ✅ Production Ready | Uses `AZURE_DEVOPS_*` configuration |

---

## Usage Examples

### Basic Configuration
```typescript
// playwright.config.ts
import {defineConfig} from '@playwright/test';

export default defineConfig({
reporter: [
[
'playwright-ai-reporter',
{
generateFix: true,
categorizeFailures: true,
slowTestThreshold: 3,
maxSlowTestsToShow: 5,
},
],
],
});
```

### Using the Provider Registry

```typescript
import {ProviderRegistry} from 'playwright-ai-reporter';

// Initialize providers
await ProviderRegistry.initialize({
ai: {type: 'openai'},
bugTracker: {type: 'github'},
database: {type: 'sqlite'},
});
// Get providers
const ai = await ProviderRegistry.getAIProvider();
const bugTracker = await ProviderRegistry.getBugTrackerProvider();
const db = await ProviderRegistry.getDatabaseProvider();
```

### Using an AI Provider Directly

```typescript
import {AIProviderFactory} from 'playwright-ai-reporter';

// Create a specific provider
const provider = await AIProviderFactory.createProvider('anthropic');
// Generate completion
const response = await provider.generateCompletion([
{role: 'system', content: 'You are a test engineer.'},
{role: 'user', content: 'Analyze this test failure...'},
]);
console.log(response.content);
```

### Using the Workflow Engine

```typescript
import {ReporterWorkflow} from 'playwright-ai-reporter';

// Initialize
await ReporterWorkflow.initialize();
// Process test failure
await ReporterWorkflow.processTestFailure(failure, sourceCode);
// Save test run
const runId = await ReporterWorkflow.saveTestRun(summary);
// Send notifications
await ReporterWorkflow.sendNotification(summary, failures);
// Cleanup
await ReporterWorkflow.cleanup();
```

For more examples, check the `examples` folder.
---
## Output Examples

### All Tests Passed

```plaintext
Starting test run: 3 tests using 2 workers
✅ Login test passed in 1.23s
✅ API integration test passed in 2.45s
⚠️ Payment test was skipped
✅ All 3 tests passed | 1 skipped | ⏱ Total: 3.68s
Running locally
Additional Metrics:
- Average passed test time: 1.84s
- Slowest test took: 2.45s
- Top 3 slowest tests:
1. API integration test: 2.45s
2. Login test: 1.23s
⚠️ Warning: 1 test was skipped.
Please ensure to test the skipped scenarios manually before deployment.
```

### Failed Tests with AI Fix Suggestions

```plaintext
Starting test run: 3 tests using 2 workers
✅ Login test passed in 1.23s
❌ API test failed in 2.45s
Retry attempt for "API test" (failed) in 2.50s
⚠️ Payment test was skipped
❌ 1 of 3 tests failed | 1 passed | 1 skipped | ⏱ Total: 6.18s
Generating AI-powered fix suggestions...
Generating fix suggestion for: API test
✅ Fix suggestion generated:
- Prompt: ./test-results/prompts/api-test.md
- Fix: ./test-results/fixes/fix-api-test.md
Generating pull request with fix...
Creating topic branch: autofix/api-test-2025-12-30T10-30-45
✅ Branch created successfully
Committing changes to autofix/api-test-2025-12-30T10-30-45
✅ Changes committed: a1b2c3d
Creating pull request: autofix/api-test-2025-12-30T10-30-45 → main
✅ Pull request created successfully:
PR #42: https://github.com/yourorg/yourrepo/pull/42
Branch: autofix/api-test-2025-12-30T10-30-45 → main
Status: open (draft)
AI fix suggestion generation complete
Additional Metrics:
- Average passed test time: 1.23s
- Slowest test took: 1.23s
Test Failures:
--- Failure #1 ---
Test: API test
Category: NetworkError
Error: Connection timeout
Stack Trace:
at Connection.connect (/src/api/connection.ts:45:7)
```

### HTML Report

After each test run, a self-contained HTML report is automatically generated:

```plaintext
Generating self-contained test health report...
✓ Loaded testSummary.json (25 tests)
✓ Loaded testFailures.json (2 failures)
✓ Loaded HTML template
✓ Loaded CSS
✓ Loaded JavaScript
✓ Generated standalone report: E:\project\test-results\test-health-report.html
╔══════════════════════════════════════════════════════════════╗
║  Test Health Report                                          ║
╠══════════════════════════════════════════════════════════════╣
║  Open in browser:                                            ║
║  file:///E:/project/test-results/test-health-report.html     ║
║                                                              ║
║  Or run: npx playwright show-report (if using Playwright)    ║
╚══════════════════════════════════════════════════════════════╝
```

The HTML report includes:

- Interactive Charts: test results overview, failure categories, duration analysis
- Test Details Grid: searchable/filterable list of all tests with status badges
- Failed Tests Section: detailed error messages with AI fix suggestions
- Flaky Tests Analysis: pattern detection and stability recommendations
- Slowest Tests: performance analysis and optimization targets
- Artifact Links: screenshots, videos, and trace files for failed tests
---
## Frequently Asked Questions (FAQs)

### General

**What is Playwright AI Reporter?**
Playwright AI Reporter is an enterprise-grade, AI-powered test reporter for Playwright that automatically analyzes test failures, creates bug reports, generates fix suggestions, and can even submit auto-healing pull requests. It uses a flexible provider-based architecture that supports multiple AI services, bug trackers, databases, and notification systems.
**Which AI providers are supported?**
We support:
- Azure OpenAI (with Managed Identity support)
- OpenAI (GPT-3.5, GPT-4)
- Anthropic Claude (Claude 3 Opus, Sonnet, Haiku)
- Google AI (Gemini Pro, Gemini Pro Vision)
- Mistral AI (Mistral 7B, Mixtral 8x7B)
You can easily switch between providers by changing your environment configuration.
**Do I need to use all features?**

No! The reporter is modular. You can:

- Use just AI fix suggestions (`generateFix: true`)
- Add bug tracking (`createBug: true`)
- Enable auto-healing PRs (`generatePR: true`)
- Store results in a database (`publishToDB: true`)
- Send notifications (`sendEmail: true`)

Mix and match based on your needs. Start simple and add features as needed.
**Is this production-ready?**
Yes! The reporter is:
- Written in TypeScript with full type safety
- Thoroughly tested in real-world scenarios
- Used in CI/CD pipelines
- Battle-tested with enterprise applications
- Actively maintained and updated
### Getting Started

**How do I get started quickly?**
1. Install: `npm install playwright-ai-reporter --save-dev`
2. Copy a config: `cp examples/env-configs/.env.github-stack .env`
3. Add your API keys to `.env`
4. Update `playwright.config.ts` to use the reporter
5. Run: `npx playwright test`

Check the Quick Start section for details.
**Which environment file should I use?**

Choose based on your stack:

- `.env.github-stack` - GitHub Issues + Mistral AI + SQLite (recommended for open source)
- `.env.azure-stack` - Azure DevOps + Azure OpenAI + MySQL (recommended for enterprise)
- `.env.openai-jira` - Jira + OpenAI + SQLite (recommended for startups/agile teams)
- `.env.anthropic-minimal` - Claude AI only (minimal setup)

All examples are in `examples/env-configs/`.
**Do I need to install all peer dependencies?**

No! Install only what you need:

- `@azure/identity` - Only if using Azure OpenAI with Managed Identity
- `@octokit/rest` - Only if using GitHub Issues/PRs
- `azure-devops-node-api` - Only if using Azure DevOps
- `mysql2` - Only if using the MySQL database
- `nodemailer` - Only if using email notifications

The reporter will work with just your AI provider installed.
### AI Features

**How accurate are the AI fix suggestions?**
The AI analyzes:
- Test code and error messages
- Stack traces and context
- Playwright best practices
- Error patterns (timeout, selector, network, etc.)
While not perfect, the suggestions are typically actionable starting points. Always review AI-generated fixes before applying them. We recommend using `generatePR: true`, which creates draft PRs for mandatory code review.
**Can I customize the AI prompts?**

Yes! The prompts are generated in `test-results/prompts/` before being sent to the AI. You can:

1. Review generated prompts
2. Modify the prompt generation logic in `src/utils/genaiUtils.ts`
3. Create custom templates
4. Add project-specific context

Check the documentation for advanced customization.
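
A hedged sketch of the kind of prompt such logic might assemble (the builder and its field names are hypothetical; the package's real logic lives in `src/utils/genaiUtils.ts`):

```typescript
// Hypothetical prompt builder, shown only to illustrate the kind of context
// the reporter assembles from a failure.
interface FailureContext {
  testName: string;
  errorMessage: string;
  stackTrace: string;
  testCode: string;
}

function buildFixPrompt(ctx: FailureContext): string {
  return [
    `A Playwright test named "${ctx.testName}" failed.`,
    `Error: ${ctx.errorMessage}`,
    `Stack trace:\n${ctx.stackTrace}`,
    `Test code:\n${ctx.testCode}`,
    'Suggest a fix that follows Playwright best practices.',
  ].join('\n\n');
}
```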
**Which AI provider is most cost-effective?**
For cost optimization:
- Mistral AI - Most affordable, good quality
- Google Gemini - Low cost, high token limits
- OpenAI GPT-3.5 - Balanced cost/performance
- Azure OpenAI - Best for enterprise with existing Azure credits
- Anthropic Claude - Premium pricing, best quality
Choose based on your budget and quality requirements.
### Auto-Healing

**How does auto-healing work?**
The auto-healing workflow:

1. Test fails → AI generates a fix
2. Reporter creates a topic branch: `autofix/test-name-timestamp`
3. Commits the AI fix to the topic branch
4. Creates a draft PR from the topic branch to the base branch
5. Team reviews and merges if the fix is correct

PRs are always created as drafts to ensure mandatory code review. Enable with `generatePR: true` in the config.
**Are auto-generated PRs safe?**
Yes, because:
- PRs are created as drafts requiring review
- Changes are committed to topic branches, not main
- AI suggestions are clearly labeled
- Full test context is included in PR description
- Team has final approval before merging
Never auto-merge AI-generated code without review.
**Can I disable auto-healing for specific tests?**

Yes! Use test annotations:

```typescript
test(
  'critical test',
  {
    annotation: {type: 'no-auto-heal', description: 'Manual review required'},
  },
  async ({page}) => {
    // Test code
  },
);
```

The reporter will skip PR generation for annotated tests.
### Providers & Integrations

**Can I use multiple bug trackers?**
Not simultaneously. Choose one bug tracker provider:
- GitHub Issues
- Azure DevOps Work Items
- Jira Tickets
However, you can easily switch between them by changing the `BUG_TRACKER_PROVIDER` environment variable.
**How do I add a custom provider?**

Implement the appropriate interface:

```typescript
import {IAIProvider} from 'playwright-ai-reporter';

export class CustomAIProvider implements IAIProvider {
  async generateCompletion(messages) {
    // Your implementation
  }
}
```

See `docs/PROVIDERS.md` for detailed instructions on adding custom providers.
**Can I use this in CI/CD?**

Absolutely! The reporter:

- Detects the CI environment automatically (a minimal sketch follows this list)
- Extracts build information (GitHub Actions, Azure Pipelines, Jenkins, etc.)
- Integrates with artifact storage
- Works with pipeline secrets for API keys
- Generates structured JSON output for pipeline steps
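
A minimal sketch of CI detection using each system's well-known environment variables (the variables are standard for those CI systems; the function itself is illustrative, not the reporter's actual code):

```typescript
// Illustrative CI detection: each CI system sets a well-known env var.
type CIInfo = {ci: boolean; system?: string; buildId?: string};

function detectCI(env = process.env): CIInfo {
  if (env.GITHUB_ACTIONS === 'true') {
    return {ci: true, system: 'GitHub Actions', buildId: env.GITHUB_RUN_ID};
  }
  if (env.TF_BUILD === 'True') {
    return {ci: true, system: 'Azure Pipelines', buildId: env.BUILD_BUILDID};
  }
  if (env.JENKINS_URL) {
    return {ci: true, system: 'Jenkins', buildId: env.BUILD_NUMBER};
  }
  return {ci: Boolean(env.CI)};
}
```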
Check `examples/tests/` for CI/CD integration examples.

### Database
**What data is stored in the database?**

Two tables:

- `test_runs` - Test run metadata (timestamp, environment, branch, commit, totals, duration)
- `test_results` - Individual test results (test_id, status, duration, errors, retries)

Both tables are indexed for fast queries on timestamp, test_run_id, test_id, and status; a schema sketch follows.
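
A hedged sketch of what the two-table schema might look like (exact column names and types are assumptions, not the package's actual DDL):

```typescript
// Illustrative DDL matching the two tables and four indexes described above.
export const schema: string[] = [
  `CREATE TABLE IF NOT EXISTS test_runs (
     id INTEGER PRIMARY KEY,
     timestamp TEXT NOT NULL,
     environment TEXT,
     branch TEXT,
     commit_sha TEXT,
     total_tests INTEGER,
     duration_ms INTEGER
   )`,
  `CREATE TABLE IF NOT EXISTS test_results (
     id INTEGER PRIMARY KEY,
     test_run_id INTEGER REFERENCES test_runs(id),
     test_id TEXT NOT NULL,
     status TEXT NOT NULL,
     timestamp TEXT NOT NULL,
     duration_ms INTEGER,
     error TEXT,
     retries INTEGER
   )`,
  // The four performance indexes mentioned under Features:
  `CREATE INDEX IF NOT EXISTS idx_runs_timestamp ON test_runs(timestamp)`,
  `CREATE INDEX IF NOT EXISTS idx_results_run ON test_results(test_run_id)`,
  `CREATE INDEX IF NOT EXISTS idx_results_test ON test_results(test_id)`,
  `CREATE INDEX IF NOT EXISTS idx_results_status ON test_results(status)`,
];
```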
**Can I query historical test data?**

Yes! Use the database provider:

```typescript
import {ProviderRegistry} from 'playwright-ai-reporter';

const db = await ProviderRegistry.getDatabaseProvider();
const results = await db.query('SELECT * FROM test_results WHERE status = ? AND timestamp > ?', ['failed', oneWeekAgo]);
```

Perfect for failure trend analysis and flaky test identification; for example, see the sketch below.
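
Continuing from the snippet above, a hedged sketch of a flaky-test query (the SQL assumes the illustrative schema sketched earlier and SQLite/MySQL boolean-sum semantics):

```typescript
// Illustrative: find tests that both passed and failed recently (a flakiness signal).
const flaky = await db.query(
  `SELECT test_id,
          SUM(status = 'failed') AS failures,
          SUM(status = 'passed') AS passes
   FROM test_results
   WHERE timestamp > ?
   GROUP BY test_id
   HAVING failures > 0 AND passes > 0`,
  [oneWeekAgo],
);
```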
### Troubleshooting

**Why am I not seeing AI fix suggestions?**
Check:

1. `generateFix: true` in `playwright.config.ts`
2. AI provider configured in `.env`
3. Valid API key
4. Network connectivity to the AI service
5. Console output for error messages

Run `npm run validate:config` to check your setup.
PRs are not being created
Verify:
1.
generatePR: true and generateFix: true in config
2. PR provider configured (PR_PROVIDER=github)
3. Valid GitHub/Azure DevOps token with repo permissions
4. Git repository initialized
5. No uncommitted changes blocking branch creationCheck logs for specific error messages.
**How do I debug configuration issues?**

Run the configuration validator:

```bash
npm run validate:config
```

This will check:
- Environment variables
- API keys validity
- Provider connectivity
- Configuration completeness
- Permission issues
Fix any reported issues before running tests.
**SQLite3 native binding errors in CI**

**Problem:** You see errors like `Could not locate the bindings file for sqlite3` in CI environments.

**Solution:** As of v0.0.2, SQLite dependencies are optional and lazy-loaded. If you don't need database features:

```typescript
{
  reporter: [
    ['playwright-ai-reporter', {
      publishToDB: false, // Disable database - no sqlite3 needed!
      // ... other options
    }]
  ]
}
```

The reporter will work without sqlite3 installed.
**Alternative solutions:**

1. Use MySQL instead: `DATABASE_PROVIDER=mysql`
2. See SQLITE-FIX.md for detailed troubleshooting

### Development & Community
**Can I use this with TypeScript?**

Yes! The reporter is written in TypeScript and provides full type definitions. Import types:

```typescript
import type {IAIProvider, IBugTrackerProvider, IDatabaseProvider} from 'playwright-ai-reporter';
```
**How do I contribute?**
We welcome contributions! See CONTRIBUTING.md for:
- Code style guidelines
- Testing requirements
- PR process
- Development setup
Or check GitHub Issues for open tasks.
**Where can I get help?**
- Documentation
- Issue Tracker
- Discussions
- Email: support@playwright-ai-reporter.dev
---
## Documentation
#### Quick Links
- Quick Start Guide - Get started in 5 minutes
- Environment Configuration - Complete setup guide with sample configurations
- Provider Documentation - Detailed provider documentation and usage
- Architecture & Design - System architecture and design decisions
- Implementation Details - Technical implementation overview
- API Reference - Complete API documentation
- Troubleshooting Guide - Common issues and solutions
#### Examples
- Environment Config Examples - Pre-configured .env files for different stacks
- Test Examples - Sample test files demonstrating reporter usage
- Workflow Examples - Code examples for common workflows
> **New here?** Start with the Quick Start Guide and the ENV_CONFIG_GUIDE.
---
## Contributing
We welcome contributions! Here's how you can help:
1. Fork the repository
2. Create your feature branch: `git checkout -b feature/amazing-feature`
3. Make your changes and commit: `git commit -m 'Add amazing feature'`
4. Push to your fork: `git push origin feature/amazing-feature`

Please ensure your PR:
- Follows the existing code style
- Includes appropriate tests
- Updates documentation as needed
- Describes the changes made
See docs/PROVIDERS.md for instructions on adding new provider implementations.
---
- ✅ **Provider Independence** - Not locked into any single service
- ✅ **Enterprise Ready** - Azure integration, managed identity, MySQL support
- ✅ **Cost Optimized** - Choose the most cost-effective AI provider
- ✅ **Flexible** - Use only the features you need
- ✅ **Extensible** - Easy to add new providers
- ✅ **Type Safe** - Full TypeScript support
- ✅ **Production Tested** - Battle-tested in real-world scenarios
- ✅ **Well Documented** - Comprehensive docs and examples
- ✅ **Active Development** - Regular updates and improvements
- ✅ **Open Source** - MIT licensed, community-driven
---
**Enterprise teams**

- Automatic bug creation in Azure DevOps for test failures
- Historical test data stored in MySQL for trend analysis
- Email notifications to the QA team
- Azure OpenAI for fix suggestions with enterprise security

**Open-source projects**

- GitHub Issues for bug tracking
- SQLite for lightweight data storage
- GitHub PRs for automated fixes
- OpenAI or Anthropic for AI suggestions

**Startups & agile teams**

- Slack notifications for instant alerts
- Jira integration for sprint planning
- Google AI (Gemini) for cost-effective analysis
- Quick iteration with auto-healing
---
This project is licensed under the MIT License - see the LICENSE file for details.
---
- Built with ❤️ for the Playwright community
- Inspired by the need for better test reporting and automatic debugging in CI/CD pipelines
- Multi-provider AI support: Azure OpenAI, OpenAI, Anthropic, Google AI, Mistral AI
- Thanks to all contributors who help make this reporter better
---
- Documentation
- Issue Tracker
- Discussions
- Email Support
- Twitter
---
Made with ❤️ by Deepak Kamboj for the Playwright community

⭐ Star us on GitHub if you find this useful!