# CyGen
AI-powered Cypress test generator.

## Installation

### Global Installation
```bash
# Install globally
npm install -g cygen

# Verify installation
cygen --help
```

### Local Installation
```bash
# Install in your project
npm install cygen

# Run using npx
npx cygen --help
```
## Usage

### Commands
CyGen provides two main commands:

1. **Watch Mode** - Continuously watch for file changes:
```bash
# Basic usage with Ollama
cygen watch --use-ai

# With OpenAI
cygen watch --use-ai --llm openai --model gpt-4 --api-key YOUR_OPENAI_KEY

# With web search enabled
cygen watch --use-ai --web-search --llm ollama --model llama3.1
```
2. **Test Mode** - Generate and optionally run tests for specific files:
```bash
# Generate tests with OpenAI
cygen test --files ./src/api/users.js --use-ai --llm openai --model gpt-3.5-turbo --api-key YOUR_OPENAI_KEY

# With custom Ollama server
cygen test --files ./src/api/*.js --use-ai --llm ollama --model mistral --base-url http://localhost:11434

# Generate and run tests
cygen test --files ./src/api/*.js --use-ai --web-search --run
```
### Command Options
#### Watch Command Options
| Option | Alias | Description | Default |
|--------|-------|-------------|---------|
| --watch-dir | -w | Directory to watch for file changes | Current directory |
| --output-dir | -o | Directory to output generated tests | cypress/integration/generated |
| --use-ai | | Enable AI-powered test generation | false |
| --web-search | | Enable web search for test generation | false |
| --llm | | LLM provider (ollama or openai) | ollama |
| --model | | Model name for the selected LLM | llama3.1 |
| --api-key | | API key for the LLM provider | |
| --base-url | | Base URL for the LLM service | http://localhost:11434 |
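For example, the watch options can be combined in a single invocation (a sketch based on the flags in the table above; the directory paths are illustrative):
```bash
# Watch ./src/api and write generated specs to a custom output directory
cygen watch -w ./src/api -o cypress/e2e/generated --use-ai --llm ollama --model llama3.1
```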
#### Test Command Options
| Option | Alias | Description | Default |
|--------|-------|-------------|---------|
| --files | -f | Files to generate tests for | (required) |
| --output-dir | -o | Directory to output generated tests | cypress/integration/generated |
| --use-ai | | Enable AI-powered test generation | false |
| --web-search | | Enable web search for test generation | false |
| --llm | | LLM provider (ollama or openai) | ollama |
| --model | | Model name for the selected LLM | llama3.1 |
| --api-key | | API key for the LLM provider | |
| --base-url | | Base URL for the LLM service | http://localhost:11434 |
| --run | | Run the generated tests with Cypress | false |
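For example, to generate tests for a single file and run them immediately (a sketch based on the flags in the table above; the path is illustrative):
```bash
# Generate tests for one file and run them with Cypress right away
cygen test -f ./src/api/users.js --use-ai --run
```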
### Programmatic Usage
You can also use CyGen programmatically in your code:
```javascript
const { CyGen } = require('cygen');

// Using OpenAI
const openaiCygen = new CyGen({
  useAI: true,
  aiOptions: {
    llm: 'openai',
    model: 'gpt-4',
    apiKey: 'your-openai-key',
    enableWebSearch: true
  }
});

// Using Ollama
const ollamaCygen = new CyGen({
  useAI: true,
  aiOptions: {
    llm: 'ollama',
    model: 'mistral',
    baseUrl: 'http://localhost:11434'
  }
});

// Generate tests (generateTestsForFile returns a promise, so await it
// inside an async function)
(async () => {
  await ollamaCygen.generateTestsForFile('./src/api/users.js');
})();
```
## Configuration
The following options are available:
- `watchDir`: Directory to watch for changes (default: current working directory)
- `outputDir`: Directory where test files will be generated (default: `./cypress/integration/generated`)
- `useAI`: Enable AI-powered test generation (default: `false`)
- `aiOptions`: AI configuration options
  - `llm`: LLM provider (`ollama` or `openai`)
  - `model`: Model name for the selected LLM
  - `apiKey`: API key for the LLM provider
  - `baseUrl`: Base URL for the LLM service
  - `enableWebSearch`: Enable web search for test generation
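As a sketch of a full configuration object (the option names come from the list above; the values are illustrative):
```javascript
const { CyGen } = require('cygen');

// Option names follow the configuration list; values here are illustrative.
const cygen = new CyGen({
  watchDir: process.cwd(),
  outputDir: './cypress/integration/generated',
  useAI: true,
  aiOptions: {
    llm: 'ollama',
    model: 'llama3.1',
    baseUrl: 'http://localhost:11434',
    enableWebSearch: false
  }
});
```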
## Supported File Types
- JavaScript/TypeScript API files
- Swagger/OpenAPI specification files (JSON/YAML)
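A specification file can be passed to the test command the same way as a source file (a sketch; the spec path is illustrative):
```bash
# Generate tests from an OpenAPI specification (path is illustrative)
cygen test --files ./docs/openapi.yaml --use-ai
```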
## Generated Tests
Tests are generated with comprehensive coverage including:

1. **Happy Path Tests**
   - Successful API calls
   - Valid request/response handling
   - Expected data validation
2. **Negative Tests**
   - Invalid input handling
   - Error response validation
   - Authentication/Authorization failures
   - Rate limiting scenarios
3. **Edge Cases**
   - Boundary value testing
   - Empty/null input handling
   - Large payload handling
   - Timeout scenarios

Example generated test:
```javascript
describe('GET /api/users', () => {
  beforeEach(() => {
    cy.intercept('GET', '/api/users').as('apiRequest');
  });

  it('should return a list of users', () => {
    cy.request({
      method: 'GET',
      url: '/api/users',
      failOnStatusCode: false
    }).then((response) => {
      expect(response.status).to.equal(200);
      expect(response.body).to.be.an('array');
      expect(response.body[0]).to.have.property('id');
      expect(response.body[0]).to.have.property('name');
    });
  });

  it('should handle invalid requests', () => {
    cy.request({
      method: 'GET',
      url: '/api/users/invalid',
      failOnStatusCode: false
    }).then((response) => {
      expect(response.status).to.equal(404);
      expect(response.body).to.have.property('error');
    });
  });
});
```
## Supported LLMs

### Ollama
- Default models: llama3.1, mistral, llama2
- Requires a local Ollama server (default: http://localhost:11434)
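A typical local setup might look like this (a sketch; assumes the Ollama CLI is already installed):
```bash
# Pull a model and start the local Ollama server
ollama pull llama3.1
ollama serve &

# Point CyGen at the local server (this base URL is the default)
cygen watch --use-ai --llm ollama --model llama3.1 --base-url http://localhost:11434
```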
### OpenAI
- Models: gpt-4, gpt-3.5-turbo
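To keep the key out of the command line, it can be supplied through ordinary shell substitution (plain shell usage; CyGen itself only sees the `--api-key` flag):
```bash
# Pass the key from an environment variable rather than typing it inline
export OPENAI_API_KEY="sk-..."
cygen test --files ./src/api/users.js --use-ai --llm openai --model gpt-4 --api-key "$OPENAI_API_KEY"
```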