# codeguard-testgen

AI-powered code review and unit test generator with AST analysis for TypeScript/JavaScript projects. Automatically reviews code quality and generates comprehensive Jest tests using Claude, OpenAI, or Gemini.
> ⚡ NEW: Unified Auto Mode - automatically review code quality AND generate tests for staged functions!
>
> ```bash
> testgen auto    # Reviews code + generates tests
> testgen review  # Only code review
> testgen test    # Only test generation
> testgen doc     # Generate API documentation
> ```
## Features
- 🤖 AI-Powered: Uses Claude, OpenAI GPT, or Google Gemini to generate intelligent tests
- 🔍 AST Analysis: Deep code analysis using the Babel parser for accurate test generation
- 📦 Codebase Indexing: Optional caching for 100x faster analysis on large projects
- 🎯 Multiple Modes: File-wise, folder-wise, function-wise, or auto test generation
- ✅ Smart Validation: Detects incomplete tests, missing assertions, and legitimate failures
- 🔄 Iterative Fixing: Automatically fixes import errors, missing mocks, and test issues
- 📘 TypeScript Support: Full support for TypeScript types, interfaces, and decorators
- ⚡ Auto Mode: Reviews code quality + generates tests for changed functions
- 🔀 Git Integration: Detects changes via git diff (staged and unstaged)
- 🚀 CI/CD Ready: Non-interactive modes perfect for automation
- 📚 Documentation Mode: AI-powered OpenAPI/Swagger documentation generation

## Installation
### Global Installation

```bash
npm install -g codeguard-testgen
```

### Local Installation

```bash
npm install --save-dev codeguard-testgen
```

## Configuration

Create a `codeguard.json` file in your project root:

```json
{
  "aiProvider": "claude",
  "apiKeys": {
    "claude": "sk-ant-api03-...",
    "openai": "sk-...",
    "gemini": "..."
  },
  "models": {
    "claude": "claude-sonnet-4-20250514",
    "openai": "gpt-4o-mini",
    "gemini": "gemini-2.0-flash-exp"
  },
  "testEnv": "vitest/jest",
  "testDir": "src/tests",
  "excludeDirs": ["node_modules", "dist", "build", ".git", "coverage"],
  "validationInterval": 5,
  "reviewExecutionMode": "parallel",
  "reviewSteps": [
    {
      "id": "code-quality",
      "name": "Code Quality",
      "category": "quality",
      "type": "ai",
      "enabled": true,
      "ruleset": "code-quality.md"
    },
    {
      "id": "security",
      "name": "Security",
      "category": "security",
      "type": "ai",
      "enabled": true,
      "ruleset": "security.md"
    }
  ]
}
```

### Configuration Options
| Option | Required | Description |
|--------|----------|-------------|
| `aiProvider` | Yes | AI provider to use: `claude`, `openai`, or `gemini` |
| `apiKeys` | Yes | API keys for the AI providers |
| `models` | No | Custom model names (uses defaults if not specified) |
| `testDir` | No | Directory for test files (default: `src/tests`) |
| `extensions` | No | File extensions to process (default: `.ts`, `.tsx`, `.js`, `.jsx`) |
| `excludeDirs` | No | Directories to exclude from scanning |
| `validationInterval` | No | Validation frequency in function-wise mode: undefined = no validation, `0` = only at end, `N` = validate every N functions |
| `docsDir` | No | Directory for generated documentation (default: `docs`) |
| `docFormat` | No | Documentation format: `json` or `yaml` (default: `json`) |
| `docTitle` | No | API documentation title (default: from package.json `name`) |
| `docVersion` | No | API version (default: from package.json `version`) |
| `includeGenericFunctions` | No | Include non-API functions in documentation (default: `true`) |
| `repoDoc` | No | Document the entire repository (`true`) or only staged changes (`false`, default) |
| `reviewSteps` | No | Array of review steps with custom rulesets (see below) |
| `reviewExecutionMode` | No | How to execute review steps: `parallel` or `sequential` (default: `parallel`) |

### Custom Review Steps
Configure custom review steps with rulesets defined in markdown files. Each ruleset is stored in the `codeguard-ruleset/` folder at your project root.

```json
{
  "reviewExecutionMode": "parallel",
  "reviewSteps": [
    {
      "id": "code-quality",
      "name": "Code Quality",
      "category": "quality",
      "type": "ai",
      "enabled": true,
      "ruleset": "code-quality.md"
    },
    {
      "id": "security",
      "name": "Security Vulnerabilities",
      "category": "security",
      "type": "ai",
      "enabled": true,
      "ruleset": "security.md"
    }
  ]
}
```

Review Step Options:
| Option | Required | Description |
|--------|----------|-------------|
| `id` | Yes | Unique identifier for the step |
| `name` | Yes | Display name for the review step |
| `category` | Yes | Category of the review (e.g., quality, security, performance) |
| `type` | Yes | Type of review (currently only `ai` is supported) |
| `enabled` | Yes | Whether this step is active (`true` or `false`) |
| `ruleset` | Yes | Filename of the markdown ruleset in the `codeguard-ruleset/` folder |

Ruleset Files:
Rulesets are markdown files stored in `codeguard-ruleset/` at your project root:

```
your-project/
├── codeguard.json
├── codeguard-ruleset/
│   ├── code-quality.md
│   ├── security.md
│   ├── performance.md
│   └── custom-rules.md
└── src/
```

Each ruleset file can contain:
- Detailed review criteria
- Specific rules and guidelines
- Examples and code snippets
- Severity guidelines
- OWASP references (for security)
- Best practices documentation
Example Ruleset File (`codeguard-ruleset/code-quality.md`):

```markdown
# Code Quality Review Ruleset

## Review Criteria

### Naming
- Functions: Use clear, descriptive names
- Variables: Use meaningful names
- Boolean variables: Prefix with is, has, should

### Complexity
- Functions should be concise (< 50 lines)
- Cyclomatic complexity should be low (< 10)
- Avoid deeply nested conditionals...
```

Execution Modes:
- `parallel` (default): All enabled review steps run simultaneously for faster completion
- `sequential`: Steps run one after another in the order defined

Default Review Steps:
If you don't specify `reviewSteps` in your config, CodeGuard uses these default steps:
- ✅ Code Quality (`code-quality.md`): naming, complexity, readability, best practices
- ✅ Potential Bugs (`potential-bugs.md`): logic errors, edge cases, type issues, async problems
- ✅ Performance (`performance.md`): algorithm efficiency, unnecessary computations, memory leaks
- ✅ Security (`security.md`): input validation, injection risks, OWASP vulnerabilities

Included Ruleset Files:
CodeGuard comes with comprehensive default rulesets in `codeguard-ruleset/`:

- `code-quality.md`: 8 categories including naming, complexity, patterns, error handling
- `potential-bugs.md`: 8 categories covering logic errors, edge cases, async issues
- `performance.md`: 8 categories for algorithms, caching, data structures, optimizations
- `security.md`: OWASP Top 10 coverage with specific checks and references

You can customize these files or create your own rulesets for project-specific requirements.
Output Format:

Reviews are organized by step in the final markdown file:

```markdown
# Code Review

## Summary
[Overall assessment]

## Files Changed
[List of files]

## Code Quality
[Findings from the code quality step]

## Security Vulnerabilities
[Findings from the security step]

## Performance Issues
[Findings from the performance step]

## Conclusion
[Final assessment]
```

See `codeguard.example.json` for a complete configuration example with additional review steps like Accessibility and Documentation Quality.

### Validation Interval
The `validationInterval` option controls when the full test suite is validated during function-wise test generation:

- Omitted (default): No periodic validation. Fastest; each function is tested independently
- `0`: Validate only at the end, after all functions are processed
- `N` (a positive number): Validate every N functions to catch integration issues early

Example use cases:

```json
{
  "validationInterval": 5
}
```

Validate after every 5 functions.

```json
{
  "validationInterval": 0
}
```

Validate only once at the end. To skip validation checkpoints entirely, omit the option.

Recommendation: Use `5` or `10` for large files with many functions to catch integration issues early; omit the option for fastest processing.
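The checkpoint logic above can be pictured as a small predicate. This is a hypothetical sketch under one plausible reading of the documented semantics, not the tool's actual implementation:

```typescript
// Decide whether to run full-suite validation after processing the
// function at `index` (1-based) out of `total`, given the interval:
//   undefined -> no checkpoints at all
//   0         -> validate only after the last function
//   N > 0     -> validate every N functions, plus after the last one
function shouldValidate(
  index: number,
  total: number,
  interval?: number
): boolean {
  if (interval === undefined) return false;    // no checkpoints
  if (interval === 0) return index === total;  // only at the very end
  return index % interval === 0 || index === total;
}

// Example: 12 functions with interval 5 -> checkpoints after 5, 10, and 12
const checkpoints = Array.from({ length: 12 }, (_, i) => i + 1)
  .filter((i) => shouldValidate(i, 12, 5));
console.log(checkpoints); // [5, 10, 12]
```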
### Getting API Keys

- Claude (Anthropic): https://console.anthropic.com/
- OpenAI: https://platform.openai.com/api-keys
- Gemini (Google): https://makersuite.google.com/app/apikey
## Quick Reference

| Command | Description | Use Case |
|---------|-------------|----------|
| `testgen auto` | Review code quality + generate tests | Complete workflow, CI/CD |
| `testgen review` | Only review code changes | Code review, quality checks |
| `testgen test` | Only generate tests for changes | Testing workflow |
| `testgen` | Interactive mode: choose a mode manually | Exploratory testing |
| Mode 1: File-wise | Generate tests for an entire file | New files, comprehensive coverage |
| Mode 2: Folder-wise | Generate tests for all files in a folder | Batch processing |
| Mode 3: Function-wise | Generate tests for specific functions | Incremental testing |

## Usage
### Auto Mode
Automatically review code quality and generate tests for changed functions:

```bash
testgen auto
```

What it does:

1. Reviews changed code for quality, bugs, performance, and security issues
2. Generates comprehensive tests for modified functions
3. Saves the review to `reviews/<filename>.review.md`
4. Creates or updates test files

Example output:

```
🔍 Scanning git changes for review...
📁 Found changes in 1 file(s) to review
📝 Reviewing: src/services/user.service.ts
📦 Changed functions: createUser, updateUser
✅ Review completed
💾 Reviews saved to: reviews/ directory
============================================================
🔍 Scanning git changes for testing...
📁 Found changes in 1 file(s)
📝 Processing: src/services/user.service.ts
📦 Changed functions: createUser, updateUser
✅ Tests generated successfully
```

### Review Mode
Get an AI code review without generating tests:

```bash
testgen review
```

What gets reviewed:

- 🎯 Code Quality: Naming, complexity, readability, best practices
- 🐛 Potential Bugs: Logic errors, edge cases, type mismatches, async issues
- ⚡ Performance: Inefficient algorithms, memory leaks, unnecessary computations
- 🔒 Security: Input validation, injection risks, authentication issues

Review output (`reviews/<filename>.review.md`):

````markdown
# Code Review: user.service.ts

## Summary
Overall code quality is good with some areas for improvement...

## Findings

#### [Security] Missing Input Validation
Function: createUser
Issue: Email parameter not validated before database insertion...
Recommended Fix:
```typescript
if (!email || !email.includes('@')) {
  throw new Error('Invalid email');
}
```

#### [Performance] Inefficient Loop
...

## ✅ Positive Aspects
- Well-structured error handling
- Clear function naming

## 💡 General Recommendations
1. Add input validation for all public functions
2. Consider adding JSDoc comments
````

### Test Mode
Generate tests without code review:
```bash
testgen test
```

How it works:

1. Reads both `git diff --staged` and `git diff` to find all changes
2. Identifies which files have been modified
3. Uses AI to detect which exported functions have changed
4. Automatically generates or updates tests for those functions
5. No user interaction required - perfect for automation!
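Steps 1-2 amount to collecting changed paths from git and keeping only testable source files. A simplified sketch of that filtering (a hypothetical helper, far simpler than the tool's actual change detection):

```typescript
const SOURCE_EXTENSIONS = [".ts", ".tsx", ".js", ".jsx"];
const EXCLUDED_DIRS = ["node_modules", "dist", "build", ".git", "coverage"];

// Given the combined output of `git diff --staged --name-only` and
// `git diff --name-only`, keep unique source files that are neither
// test files nor inside excluded directories.
function filterChangedSourceFiles(diffOutput: string): string[] {
  const files = diffOutput.split("\n").map((l) => l.trim()).filter(Boolean);
  const unique = [...new Set(files)];
  return unique.filter((file) => {
    if (!SOURCE_EXTENSIONS.some((ext) => file.endsWith(ext))) return false;
    if (/\.(test|spec)\.|__tests__\/|\/tests\//.test(file)) return false;
    if (EXCLUDED_DIRS.some((dir) => file.split("/").includes(dir))) return false;
    return true;
  });
}

console.log(
  filterChangedSourceFiles(
    [
      "src/services/user.service.ts",
      "src/tests/services/user.service.test.ts", // test file: skipped
      "README.md",                               // non-source: skipped
      "src/services/user.service.ts",            // duplicate: deduped
    ].join("\n")
  )
); // ["src/services/user.service.ts"]
```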
Example workflows:

Complete workflow (review + test):
```bash
# Make changes to your code
vim src/services/user.service.ts

# Stage your changes
git add src/services/user.service.ts

# Review code quality and generate tests
testgen auto
```

Review only:

```bash
# Get a code review for staged changes
testgen review

# Check the review
cat reviews/user.service.review.md
```

Test generation only:

```bash
# Generate tests without a review
testgen test
```

Documentation generation:

```bash
# Generate API documentation
testgen doc
```

Output:
```
🧪 AI-Powered Unit Test Generator with AST Analysis
🤖 Auto Mode: Detecting changes via git diff
✅ Using OPENAI (gpt-4o-mini) with AST-powered analysis
🔍 Scanning git changes...
📁 Found changes in 2 file(s)
📝 Processing: src/services/user.service.ts
📦 Changed functions: createUser, updateUser
✅ Tests generated successfully
============================================================
📊 Auto-Generation Summary
============================================================
✅ Successfully processed: 1 file(s)
📝 Functions tested: 2
============================================================
```

Benefits:

- 🔍 Quality Assurance: Catch issues before they reach production
- ⚡ Fast: Only processes changed files
- 🎯 Targeted: Reviews and tests only modified functions
- 🚀 CI/CD Ready: Non-interactive, perfect for automation
- 🛡️ Safe: Preserves existing tests for unchanged functions
- 📋 Trackable: All reviews saved for historical reference

What files are processed:

- ✅ Source files with supported extensions (`.ts`, `.tsx`, `.js`, `.jsx`)
- ✅ Files with exported functions
- ❌ Test files (`.test.`, `.spec.`, `__tests__/`, `/tests/`)
- ❌ Files in `node_modules`, `dist`, `build`, etc.
- ❌ Non-source files (configs, markdown, etc.)

### Interactive Mode
Simply run the command and follow the prompts:

```bash
testgen
```

or

```bash
codeguard
```

You'll be guided through:

1. Selecting a test generation mode (file/folder/function-wise)
2. Choosing files or functions to test
3. Optional codebase indexing for faster processing

### Test Generation Modes
#### 1. File-wise Mode
Generate tests for a single file:
- Select from a list of source files
- Generates comprehensive tests for all exported functions
- Creates test file with proper structure and mocks
#### 2. Folder-wise Mode
Generate tests for all files in a directory:
- Select a folder from your project
- Processes all matching files recursively
- Batch generates tests with progress tracking
#### 3. Function-wise Mode
Generate tests for specific functions:
- Select a file
- Choose which functions to test
- Preserves existing tests for other functions
- Ideal for incremental test development
#### 4. Auto Mode (Unified)
Review code quality and generate tests automatically:
- Analyzes git diff (staged and unstaged changes)
- AI reviews code for quality, bugs, performance, security
- Generates comprehensive review markdown files
- Creates tests for changed exported functions
- Non-interactive - perfect for CI/CD pipelines
- Use: `testgen auto`

#### 5. Review Mode
AI-powered code review only:
- Comprehensive analysis by senior-level AI reviewer
- Reviews code quality, potential bugs, performance issues, security vulnerabilities
- Uses AST tools to understand full context
- Generates structured markdown reports
- Use: `testgen review`

#### 6. Test Mode
Test generation only:
- Generates tests for changed functions
- Skips code review process
- Faster when you only need tests
- Use: `testgen test`

#### 7. Documentation Mode
AI-powered API documentation generation:
- Default: Documents only staged/changed functions (like review/test modes)
- Full Repo: Set `"repoDoc": true` to document the entire codebase
- Analyzes codebase using AST tools
- Auto-detects API endpoints (Express, NestJS, Fastify, Koa)
- Generates comprehensive OpenAPI 3.0 specification
- Documents both API routes and generic functions
- Smart merge with existing documentation
- Supports JSON and YAML formats
- Use: `testgen doc`

Two modes:

1. Changed Files Only (default): `"repoDoc": false` or omitted
   - Works like `testgen review` and `testgen test`
   - Only documents staged/changed functions
   - Fast and targeted
   - Perfect for incremental updates
   - Requires a git repository
2. Full Repository: `"repoDoc": true`
   - Documents the entire codebase
   - Comprehensive documentation generation
   - Useful for initial documentation or major updates
   - No git requirement

What it documents:
- ✅ API Endpoints: All REST API routes with methods, paths, parameters
- ✅ Request/Response Schemas: Inferred from TypeScript types
- ✅ Authentication: Detects and documents auth requirements
- ✅ Error Responses: Documents error cases and status codes
- ✅ Generic Functions: Optional documentation for utility functions
- ✅ Usage Examples: AI-generated examples for each endpoint

Supported Frameworks:

- Express: `app.get()`, `router.post()`, route methods
- NestJS: `@Controller()`, `@Get()`, `@Post()`, decorators
- Fastify: `fastify.route()`, route configurations
- Koa: `router.get()`, middleware patterns
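Framework detection comes down to recognizing these call patterns. As a rough illustration only (a regex sketch; the tool itself walks the Babel AST rather than matching strings), Express-style routes could be spotted like this:

```typescript
interface RouteMatch {
  method: string;
  path: string;
}

// Very simplified: match calls like `app.get('/users', ...)` or
// `router.post("/users", ...)`. Real detection uses the AST instead.
function findExpressRoutes(source: string): RouteMatch[] {
  const pattern = /\b(?:app|router)\.(get|post|put|patch|delete)\(\s*['"`]([^'"`]+)['"`]/g;
  const routes: RouteMatch[] = [];
  for (const m of source.matchAll(pattern)) {
    routes.push({ method: m[1].toUpperCase(), path: m[2] });
  }
  return routes;
}

const sample = `
  app.get('/users', listUsers);
  router.post('/users', createUser);
`;
console.log(findExpressRoutes(sample));
// [{ method: "GET", path: "/users" }, { method: "POST", path: "/users" }]
```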
Example usage:

```bash
# Document only changed/staged functions (default)
testgen doc
```

Output:

```
📝 Documentation Mode: Generating API documentation
🔍 Scanning git changes for documentation...
📁 Found changes in 2 file(s)
🤖 Generating OpenAPI specification...
✅ Documentation generated successfully
============================================================
📊 Documentation Summary
============================================================
✅ API Endpoints documented: 5
✅ Generic functions documented: 8
💾 Output: docs/openapi.json
============================================================
```

For full repository documentation, set this in `codeguard.json`:

```json
{
  "repoDoc": true
}
```

Generated OpenAPI spec:
```json
{
  "openapi": "3.0.0",
  "info": {
    "title": "My API",
    "version": "1.0.0"
  },
  "paths": {
    "/users": {
      "get": {
        "summary": "Get all users",
        "responses": {
          "200": {
            "description": "Success",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": { "$ref": "#/components/schemas/User" }
                }
              }
            }
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "User": {
        "type": "object",
        "properties": {
          "id": { "type": "string" },
          "name": { "type": "string" },
          "email": { "type": "string" }
        }
      }
    }
  }
}
```

Smart merging:
When existing documentation is found, CodeGuard merges intelligently:

- ✅ Preserves manually edited descriptions and summaries
- ✅ Updates schemas with the latest types from code
- ✅ Adds new endpoints without removing manual changes
- ✅ Maintains custom examples and documentation
- ✅ Tracks generation metadata and timestamps
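The merge precedence can be pictured as: generated fields fill gaps, manual edits win. A hypothetical sketch for a single OpenAPI operation object (illustrative only, not CodeGuard's actual merge code):

```typescript
interface Operation {
  summary?: string;
  description?: string;
  responses?: Record<string, unknown>;
}

// Manual edits to summary/description are preserved; responses and
// schemas are refreshed from the newly generated spec.
function mergeOperation(
  existing: Operation | undefined,
  generated: Operation
): Operation {
  if (!existing) return generated; // new endpoint: take generated as-is
  return {
    ...generated,
    summary: existing.summary ?? generated.summary,
    description: existing.description ?? generated.description,
  };
}

const merged = mergeOperation(
  { summary: "Get all users (hand-edited)" },
  { summary: "GET /users", responses: { "200": { description: "Success" } } }
);
console.log(merged.summary); // "Get all users (hand-edited)"
```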
## How It Works
### Code Review Pipeline
1. Git Diff Analysis: Detects changed files and functions
2. AST Analysis: Deep parse of code structure using Babel
3. Context Understanding: AI uses tools to analyze:
- Function implementations
- Dependencies and imports
- Type definitions
- Related code context
4. Multi-Aspect Review: Analyzes for:
- Code quality and best practices
- Potential bugs and edge cases
- Performance bottlenecks
- Security vulnerabilities
5. Structured Report: Generates markdown with:
- Severity-based findings
- Code snippets and fixes
- Positive observations
   - Actionable recommendations
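Step 5's structured report is essentially findings grouped by severity and rendered to markdown. A minimal, hypothetical sketch of that assembly (the field names are illustrative, not the tool's internal types):

```typescript
interface Finding {
  severity: "critical" | "warning" | "info";
  category: string;
  title: string;
  detail: string;
}

// Render findings into a severity-ordered markdown report,
// most severe first, using the heading style the reviews use.
function renderFindings(findings: Finding[]): string {
  const order: Finding["severity"][] = ["critical", "warning", "info"];
  const lines: string[] = ["# Code Review", "", "## Findings"];
  for (const severity of order) {
    for (const f of findings.filter((x) => x.severity === severity)) {
      lines.push("", `#### [${f.category}] ${f.title}`, f.detail);
    }
  }
  return lines.join("\n");
}

const report = renderFindings([
  { severity: "warning", category: "Performance", title: "Inefficient Loop", detail: "Nested O(n^2) scan." },
  { severity: "critical", category: "Security", title: "Missing Input Validation", detail: "Email not validated." },
]);
console.log(report.includes("#### [Security] Missing Input Validation")); // true
```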
### Test Generation Pipeline

1. AST Analysis: Parses your code using Babel to understand structure
2. Dependency Resolution: Analyzes imports and calculates correct paths
3. AI Generation: Uses AI to generate comprehensive test cases
4. Validation: Checks for completeness, assertions, and coverage
5. Execution: Runs tests with Jest to verify correctness
6. Iterative Fixing: Automatically fixes common issues like:
- Import path errors
- Missing mocks
- Database initialization errors
- Type mismatches
7. Failure Detection: Distinguishes between test bugs and source code bugs
### Documentation Pipeline

1. File Scanning: Recursively scans all source files in the project
2. AST Analysis: Parses each file using Babel to understand structure
3. Endpoint Detection: AI identifies API routes across different frameworks:
- Express: app.METHOD(), router.METHOD()
- NestJS: @Controller(), @Get(), @Post(), etc.
- Fastify: fastify.route(), route configurations
- Koa: router.METHOD(), middleware chains
4. Schema Inference: Extracts TypeScript types for request/response schemas
5. AI Enhancement: AI generates:
- Meaningful descriptions for each endpoint
- Parameter documentation
- Response examples
- Error scenarios
6. OpenAPI Generation: Builds complete OpenAPI 3.0 specification
7. Smart Merge: Intelligently merges with existing documentation
8. File Output: Writes to `docs/openapi.json` or `.yaml`

## Generated Test Features
The AI generates tests with:
- ✅ Proper imports and type definitions
- ✅ Jest mocks for dependencies
- ✅ Multiple test cases per function:
  - Happy path scenarios
  - Edge cases (null, undefined, empty arrays)
  - Error conditions
  - Async behavior testing
- ✅ Clear, descriptive test names
- ✅ Complete implementations (no placeholder comments)
- ✅ Real assertions with expect() statements
## Advanced Features

### CI/CD Integration
CodeGuard modes are designed for continuous integration workflows:
GitHub Actions - Complete Workflow (Review + Tests):

```yaml
name: AI Code Review & Test Generation

on:
  pull_request:
    branches: [ main, develop ]

jobs:
  review-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # Fetch full history for git diff
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - name: Install dependencies
        run: npm install
      - name: Install CodeGuard
        run: npm install -g codeguard-testgen
      - name: Review code and generate tests
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: testgen auto
      - name: Upload review reports
        uses: actions/upload-artifact@v3
        with:
          name: code-reviews
          path: reviews/
      - name: Commit generated tests and reviews
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add src/tests/ reviews/
          git commit -m "🤖 AI code review + tests for changed functions" || echo "No changes"
          git push
```

GitHub Actions - Review Only:
```yaml
name: AI Code Review

on:
  pull_request:
    branches: [ main ]

jobs:
  code-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
      - name: Install CodeGuard
        run: npm install -g codeguard-testgen
      - name: AI Code Review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: testgen review
      - name: Comment PR with review
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const reviews = fs.readdirSync('reviews/');
            for (const review of reviews) {
              const content = fs.readFileSync(`reviews/${review}`, 'utf8');
              github.rest.issues.createComment({
                issue_number: context.issue.number,
                owner: context.repo.owner,
                repo: context.repo.repo,
                body: `## AI Code Review: ${review}\n\n${content}`
              });
            }
```

GitLab CI Example:
```yaml
review-and-test:
  stage: quality
  script:
    - npm install -g codeguard-testgen
    - testgen auto  # Review + tests
  artifacts:
    paths:
      - reviews/
      - src/tests/
  only:
    - merge_requests

review-only:
  stage: quality
  script:
    - npm install -g codeguard-testgen
    - testgen review
  artifacts:
    reports:
      codequality: reviews/
  only:
    - merge_requests
```

Pre-commit Hook:
```bash
#!/bin/bash
# .git/hooks/pre-commit

# Review code and generate tests for staged changes
testgen auto

# Add generated tests and reviews to the commit
git add src/tests/ reviews/
```

Pre-push Hook (Review Only):

```bash
#!/bin/bash
# .git/hooks/pre-push

# Quick code review before pushing
testgen review

# Show a review summary
echo "📝 Code Review Complete - Check reviews/ directory"
```

### Codebase Indexing
On first run, you'll be prompted to enable codebase indexing:

```
Enable codebase indexing? (y/n)
```

Benefits:

- 100x+ faster analysis on subsequent runs
- Instant dependency lookups
- Cached AST parsing
- Automatic update detection

The index is stored in `.codeguard-cache/` and automatically updates when files change.
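Update detection can be as simple as comparing a file's current modification time against the one recorded in the index. A hypothetical sketch of that staleness check (the entry shape is illustrative, not the actual cache format):

```typescript
interface IndexEntry {
  filePath: string;
  mtimeMs: number; // modification time recorded when the file was indexed
  exportedFunctions: string[];
}

// An entry is stale (needs re-parsing) when the file on disk has been
// modified after it was indexed.
function isStale(entry: IndexEntry, currentMtimeMs: number): boolean {
  return currentMtimeMs > entry.mtimeMs;
}

const entry: IndexEntry = {
  filePath: "src/services/user.service.ts",
  mtimeMs: 1_700_000_000_000,
  exportedFunctions: ["createUser", "deleteUser"],
};

console.log(isStale(entry, 1_700_000_500_000)); // true  (file changed)
console.log(isStale(entry, 1_700_000_000_000)); // false (unchanged)
```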
### Smart Failure Detection

The tool distinguishes between:
Fixable Test Issues (automatically fixed):
- Wrong import paths
- Missing mocks
- Incorrect assertions
- TypeScript errors
Legitimate Source Code Bugs (reported, not fixed):
- Function returns wrong type
- Missing null checks
- Logic errors
- Unhandled edge cases
When legitimate bugs are found, they're reported with details for you to fix in the source code.
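One way to picture the distinction is classifying each Jest failure message by what it implies. This is a hypothetical heuristic for illustration; the real tool uses much richer context than string matching:

```typescript
type FailureKind = "fixable-test-issue" | "source-code-bug";

// Module-resolution and mock errors are test-side problems the tool can
// repair; assertion failures on an otherwise valid test usually point
// at the source code itself.
function classifyFailure(message: string): FailureKind {
  const testSidePatterns = [
    /Cannot find module/i,       // wrong import path in the test
    /jest\.mock/i,               // mock setup problem
    /is not a function.*mock/i,  // missing or misconfigured mock
  ];
  if (testSidePatterns.some((p) => p.test(message))) {
    return "fixable-test-issue";
  }
  return "source-code-bug";
}

console.log(classifyFailure("Cannot find module '../../services/user.service'"));
// "fixable-test-issue"
console.log(classifyFailure("expect(received).toEqual(expected): received null"));
// "source-code-bug"
```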
## Examples

### Auto Mode Example
Step 1: Make changes to a function
```typescript
// src/services/user.service.ts
export const createUser = async (name: string, email: string) => {
  // Added email validation
  if (!email.includes('@')) {
    throw new Error('Invalid email');
  }
  return await db.users.create({ name, email });
};

export const deleteUser = async (id: string) => {
  return await db.users.delete(id);
};
```
Step 2: Stage changes and run auto mode

```bash
git add src/services/user.service.ts
testgen auto
```

Output:

```
🔍 Scanning git changes for review...
📁 Found changes in 1 file(s)
📝 Reviewing: src/services/user.service.ts
📦 Changed functions: createUser
✅ Review completed
============================================================
🔍 Scanning git changes for testing...
📁 Found changes in 1 file(s)
📝 Processing: src/services/user.service.ts
📦 Changed functions: createUser
✅ Tests generated successfully
```

Results:

- `reviews/user.service.review.md` created with a code quality analysis
- Only `createUser` gets new tests; `deleteUser` tests remain unchanged!

Review excerpt:
````markdown
## Findings

#### [Code Quality] Weak Email Validation
Function: createUser
Issue: Email validation only checks for the '@' symbol, which is not comprehensive

Recommended Fix:
```typescript
const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
if (!emailRegex.test(email)) {
  throw new Error('Invalid email format');
}
```
````

### Test Generation Example
```typescript
// src/services/user.service.ts
export class UserService {
  async getUser(id: string): Promise<User> {
    return await this.db.findUser(id);
  }
}
```

Generated test:
```typescript
// src/tests/services/user.service.test.ts
import { UserService } from '../../services/user.service';

jest.mock('../../database');

describe('UserService', () => {
  describe('getUser', () => {
    test('should return user when id exists', async () => {
      const mockUser = { id: '123', name: 'John' };
      const service = new UserService();
      service.db.findUser = jest.fn().mockResolvedValue(mockUser);
      const result = await service.getUser('123');
      expect(result).toEqual(mockUser);
      expect(service.db.findUser).toHaveBeenCalledWith('123');
    });
    test('should handle null id', async () => {
      const service = new UserService();
      await expect(service.getUser(null)).rejects.toThrow();
    });
  });
});
```

## Troubleshooting
### Common Issues
#### "Not a git repository"
The
auto, test, and review commands require git to detect changes. Initialize git in your project:
`bash
git init
`#### "No changes detected in source files"
This means:
- No staged or unstaged changes exist
- Only test files were modified (test files are excluded)
- Changes are in non-source files
Check your changes:
`bash
git status
git diff
`#### Review/Test mode not working
Make sure you're using the correct command:
`bash
testgen auto # Review + tests
testgen review # Only review
testgen test # Only tests
`#### "No exported functions changed"
Possible causes:
1. AI model misconfigured: Check your
codeguard.json has a valid model name:
`json
{
"models": {
"openai": "gpt-4o-mini" // โ
Correct
// NOT "gpt-5-mini" โ
}
}
`
2. Only internal functions changed: Auto mode only generates tests for exported functions
3. File has no exported functions: Make sure functions are exported:
`typescript
export const myFunction = () => { } // โ
Will be tested
const internalFunc = () => { } // โ Will be skipped
`#### Debugging Auto Mode
Enable detailed logging by checking the console output:
`bash
testgen auto
`Look for:
-
๐ฆ Found X exported function(s): ... - Shows detected functions
- ๐ค AI response: ... - Shows what AI detected
- ๐ AST Analysis result: ... - Shows file parsing results$3
Create a
codeguard.json file in your project root. See Configuration section above.$3
Ensure your
codeguard.json has the correct API key for your selected provider:`json
{
"aiProvider": "claude",
"apiKeys": {
"claude": "sk-ant-..."
}
}
`$3
The tool automatically detects and fixes import path errors. If issues persist:

1. Check that all dependencies are installed
2. Verify your project structure matches the expected paths
3. Ensure TypeScript is configured correctly

#### Babel parser errors

Install the Babel dependencies:

```bash
npm install --save-dev @babel/parser @babel/traverse
```

## Programmatic Usage
You can also use CodeGuard as a library:

```typescript
import { generateTests, analyzeFileAST } from 'codeguard-testgen';

// Generate tests for a file
await generateTests('src/services/user.service.ts');

// Analyze a file's AST
const analysis = analyzeFileAST('src/utils/helpers.ts');
console.log(analysis.functions);
```

## Project Structure
After installation, your project will have:
```
your-project/
├── codeguard.json              # Configuration file
├── src/
│   ├── services/
│   │   └── user.service.ts
│   └── tests/                  # Generated tests
│       └── services/
│           └── user.service.test.ts
├── reviews/                    # AI code reviews
│   └── user.service.review.md
└── .codeguard-cache/           # Optional index cache
```

## Requirements

- Node.js >= 16.0.0
- Jest (for running generated tests)
- TypeScript (for TypeScript projects)

## License

MIT

## Contributing

Contributions welcome! Please open an issue or submit a pull request.

## Support

For issues, questions, or feature requests, please open an issue on GitHub.