# StressMaster

AI-powered load testing CLI tool using natural language commands. Convert plain English test descriptions into K6 load test scripts.
```bash
npm install stressmaster
```

https://github.com/user-attachments/assets/5eb99932-492f-4d38-9e59-efce48db43b6
A local-first AI-powered load testing tool that accepts natural language commands to perform API load testing. The system uses AI models to parse user prompts and convert them into structured load test specifications that can be executed using K6.
StressMaster supports multiple AI providers for natural language parsing:
- Claude - Claude 3 models via direct API or OpenRouter
- OpenRouter - Access to multiple AI models through OpenRouter
- OpenAI - GPT-3.5, GPT-4, and other OpenAI models
- Google Gemini - Gemini Pro and other Google AI models
## Features

- Natural Language Interface: Describe load tests in plain English
- Multiple Test Types: Spike, stress, endurance, volume, and baseline testing
- K6 Integration: Generates and executes K6 scripts automatically
- Real-time Monitoring: Live progress tracking and metrics
- Comprehensive Reporting: Detailed analysis with AI-powered recommendations
- Export Formats: JSON, CSV, and HTML export capabilities
- Cloud AI Integration: Supports multiple cloud AI providers (Claude, OpenAI, Gemini, OpenRouter)
- No Local AI Required: Uses cloud-based AI models for natural language parsing
## Architecture

```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│   User Input    │─────▶│    AI Parser    │─────▶│  K6 Generator   │
│ (Natural Lang.) │      │   (AI Model)    │      │                 │
└─────────────────┘      └─────────────────┘      └─────────────────┘
                                                           │
┌─────────────────┐      ┌─────────────────┐      ┌────────▼────────┐
│    Results &    │◀─────│  Test Executor  │◀─────│    Load Test    │
│ Recommendations │      │      (K6)       │      │  Orchestrator   │
└─────────────────┘      └─────────────────┘      └─────────────────┘
```
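Conceptually, the pipeline runs a prompt through parsing, script generation, and execution. A minimal sketch in JavaScript (all names here are illustrative; StressMaster delegates the parsing step to the configured AI provider, not a regex):

```javascript
// Hypothetical sketch: natural-language prompt -> structured spec -> K6 script source.
function parsePrompt(prompt) {
  // Stand-in for the AI parsing step.
  const m = prompt.match(/send (\d+) (GET|POST) requests to (\S+)/i);
  return m ? { count: Number(m[1]), method: m[2].toUpperCase(), url: m[3] } : null;
}

function generateK6Script(spec) {
  // Emit a runnable K6 script as a string.
  return [
    "import http from 'k6/http';",
    `export const options = { iterations: ${spec.count} };`,
    "export default function () {",
    `  http.${spec.method.toLowerCase()}('${spec.url}');`,
    "}",
  ].join("\n");
}

console.log(generateK6Script(parsePrompt("send 10 GET requests to https://httpbin.org/get")));
```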
## Prerequisites

- Node.js: Version 18.0 or higher
- npm: Version 9.0 or higher
- K6: Installed and available in PATH (for load test execution)
- Internet Access: Required for AI provider API calls
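You can verify these prerequisites from a terminal (standard version commands; the k6 check is guarded so it prints a hint instead of failing when k6 is not yet installed):

```shell
# Check Node.js version (needs 18.0+)
node --version

# Check npm version (needs 9.0+)
npm --version

# Check that k6 is on PATH; print a hint if it is not
command -v k6 >/dev/null 2>&1 && k6 version || echo "k6 not found in PATH - install it from k6.io"
```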
## Installation

#### Option A: NPM Global Installation (Recommended)
```bash
# Install from npm
npm install -g stressmaster
```
#### Option B: Development Installation (from source)
```bash
# Clone the repository
git clone https://github.com/mumzworld-tech/StressMaster.git
cd StressMaster

# Install dependencies
npm install

# Build the project
npm run build

# Install globally
npm install -g .
```

#### Option C: Testing Locally in Another Project (Development)
To test StressMaster locally in another project before publishing:
```bash
# In the StressMaster directory
npm install
npm run build
npm link

# In your test project directory
npm link stressmaster

# Now use StressMaster in your project
stressmaster --version
```

📖 See LOCAL_TESTING_GUIDE.md for detailed instructions on:
- Testing StressMaster locally using `npm link`
- Using `npm pack` for production-like testing
- Verifying file resolution in other projects
- Configuration when testing locally

## 🚀 Quick Start
After installation, you can immediately start using StressMaster:
```bash
# Interactive mode
stressmaster

# Direct command
stressmaster "send 10 GET requests to https://httpbin.org/get"

# Spike test
stressmaster "spike test with 50 requests in 30 seconds to https://api.example.com"

# Export results
stressmaster export html
```

Try your first load test:
```bash
stressmaster "Send 100 GET requests to https://httpbin.org/get over 30 seconds"
```

## Can I Test Localhost APIs?

✅ Yes! StressMaster fully supports testing localhost APIs. You can test your local backend applications directly:
```bash
# Test your local API
stressmaster "send 100 POST requests to http://localhost:3000/api/v1/users"

# Test with different ports
stressmaster "spike test with 50 requests to http://localhost:8080/api/products"

# Test with headers and payload
stressmaster "send 10 POST requests to http://localhost:5000/api/orders with header Authorization Bearer token123 and JSON body @payload.json"
```

Key Points:
- ✅ Works with `http://localhost` or `http://127.0.0.1`
- ✅ Supports any port (e.g., :3000, :8080, :5000)
- ✅ Works with local API development servers
- ✅ No special configuration needed - just use the localhost URL

#### Example: Testing Your Local Backend
```bash
# Start your local API server (e.g., Express, FastAPI, etc.)
# Then run StressMaster:
stressmaster "send 50 GET requests to http://localhost:3000/api/v1/users"
stressmaster "POST 20 requests to http://localhost:3000/api/v1/orders with JSON body @order-data.json increment orderId"
```

## Setup
#### Quick Setup (Recommended)
After installation, run the interactive setup wizard to configure everything automatically:
```bash
stressmaster setup
```

This wizard will:
- ✅ Guide you through choosing your AI provider (Claude, OpenAI, Gemini, OpenRouter)
- ✅ Prompt for API keys and configuration
- ✅ Create the `config/ai-config.json` file automatically
- ✅ Optionally create a `.env` file for environment variables
- ✅ Show you next steps

That's it! The setup wizard handles all the configuration for you.
#### Manual Configuration (Advanced)
Important: When StressMaster is installed as an npm package, all configuration is stored in your project directory (where you run the command), not in StressMaster's installation directory.
StressMaster loads configuration in this priority order:

1. Environment Variables (highest priority)
2. Config File (`.stressmaster/config/ai-config.json` in your project)
3. package.json (in a `stressmaster` section)
4. Defaults (lowest priority)

#### Method 1: Environment Variables (Recommended)
Create a `.env` file in your project directory:

```bash
# In your project directory (e.g., /path/to/your/project/.env)
AI_PROVIDER=claude
AI_API_KEY=your-api-key-here
AI_MODEL=claude-3-5-sonnet-20241022

# Or for OpenAI
AI_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
AI_MODEL=gpt-3.5-turbo
```

Then load it (if you're using a tool like `dotenv`):

```bash
# Your project can load .env automatically, or use:
export $(cat .env | xargs)
```

#### Method 2: Config File
Create `.stressmaster/config/ai-config.json` in your project directory:

```
your-project/
├── .stressmaster/
│   └── config/
│       └── ai-config.json  # ← Created by StressMaster or setup/switch scripts
├── .env                    # ← Or use this for env vars
└── package.json
```

File location: `.stressmaster/config/ai-config.json` in your project directory (where you run `stressmaster`)

#### Method 3: package.json
Add configuration to your project's `package.json`:

```json
{
  "name": "my-project",
  "stressmaster": {
    "provider": "claude",
    "apiKey": "your-api-key",
    "model": "claude-3-5-sonnet-20241022"
  }
}
```

### Switching AI Providers
StressMaster automatically creates a configuration file on first use. You can switch between AI providers using simple commands:
#### Quick Provider Switching:
Use the interactive setup wizard to switch providers:
```bash
stressmaster setup
```

Or manually edit the configuration file (see below).
#### Manual Configuration:
The AI configuration is stored in `.stressmaster/config/ai-config.json` (automatically created on first use):

```json
{
  "provider": "claude",
  "model": "claude-3-5-sonnet-20241022",
  "endpoint": "https://api.anthropic.com/v1",
  "maxRetries": 3,
  "timeout": 30000,
  "options": {
    "temperature": 0.1
  }
}
```

#### Provider Setup:
- OpenAI: Get API key from OpenAI and configure via `stressmaster setup`
- Claude: Get API key from Anthropic and configure via `stressmaster setup`
- OpenRouter: Get API key from OpenRouter and configure via `stressmaster setup`
- Gemini: Get API key from Google AI and configure via `stressmaster setup`

> Note: The `config/ai-config.json` file contains API keys and is automatically excluded from git. Use `config/ai-config.example.json` as a reference.

## 💻 CLI Usage
StressMaster provides a powerful command-line interface with natural language processing:
### Basic Commands
```bash
# Show help
stressmaster --help
sm --help

# Show version
stressmaster --version
sm --version

# Interactive mode
stressmaster

# Run a test directly
stressmaster "send 10 GET requests to https://httpbin.org/get"

# Export results
stressmaster export html
sm export json --include-raw
```

### Test Examples
```bash
# Basic GET test
stressmaster "send 5 GET requests to https://httpbin.org/get"

# POST with JSON payload
stressmaster "POST 20 requests with JSON payload to https://api.example.com/users"

# Spike test
stressmaster "spike test with 100 requests in 60 seconds to https://api.example.com"

# Ramp-up test
stressmaster "ramp up from 10 to 100 requests over 2 minutes to https://api.example.com"

# Stress test
stressmaster "stress test with 500 requests to https://api.example.com"

# Random burst test
stressmaster "random burst test with 50 requests to https://api.example.com"
```

### Export Commands
```bash
# Export to different formats
stressmaster export json
stressmaster export csv
stressmaster export html

# Include raw data
stressmaster export json --include-raw

# Include recommendations
stressmaster export html --include-recommendations
```

### Interactive Mode
When you run `stressmaster` without arguments, you enter interactive mode where you can use structured commands:

Configuration Commands:
```bash
└─ stressmaster ❯ config show           # Show current configuration
└─ stressmaster ❯ config set key value  # Set configuration value
└─ stressmaster ❯ config init           # Initialize configuration
```

File Management:
```bash
└─ stressmaster ❯ file list                 # List all files
└─ stressmaster ❯ file list *.json          # List JSON files
└─ stressmaster ❯ file validate @file.json  # Validate file reference
└─ stressmaster ❯ file search pattern       # Search for files
```

Results & Export:
```bash
└─ stressmaster ❯ results list  # List recent test results
└─ stressmaster ❯ results show  # Show detailed result
└─ stressmaster ❯ export json   # Export last result as JSON
└─ stressmaster ❯ export csv    # Export as CSV
└─ stressmaster ❯ export html   # Export as HTML report
```

OpenAPI Integration:
```bash
└─ stressmaster ❯ openapi parse @api.yaml     # Parse OpenAPI spec
└─ stressmaster ❯ openapi list @api.yaml      # List endpoints
└─ stressmaster ❯ openapi payloads @api.yaml  # Generate payloads
└─ stressmaster ❯ openapi curl @api.yaml      # Generate cURL commands
```

File Autocomplete: Press `Tab` after typing `@` to see file suggestions!

### Command Aliases

- `stressmaster` - Full command name
- `sm` - Short alias for quick commands

## 🤖 AI Model Setup
StressMaster supports multiple AI model configurations. Choose the setup that best fits your needs:
### Recommended Setup
Use Anthropic Claude directly or via OpenRouter for reliable, high-quality parsing.
### OpenAI Setup
Advantages: Better performance, more reliable, no local setup
#### Setup Steps:
1. Get OpenAI API Key:
- Visit OpenAI Platform
- Create a new API key
- Copy the key
2. Configure StressMaster:
```bash
# Edit your .env file
AI_PROVIDER=openai
OPENAI_API_KEY=your-api-key-here
OPENAI_MODEL=gpt-4
# or use gpt-3.5-turbo for cost savings
```

3. Test Configuration:
```bash
# Test the API connection
curl -H "Authorization: Bearer your-api-key" \
  https://api.openai.com/v1/models
```

### Anthropic Claude Setup
Advantages: Excellent reasoning, good for complex parsing
#### Setup Steps:
1. Get Anthropic API Key:
- Visit Anthropic Console
- Create a new API key
- Copy the key
2. Configure StressMaster:
```bash
# Edit your .env file
AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=your-api-key-here
ANTHROPIC_MODEL=claude-3-sonnet-20240229
```

### Google Gemini Setup
Advantages: Good performance, competitive pricing
#### Setup Steps:
1. Get Google API Key:
- Visit Google AI Studio
- Create a new API key
- Copy the key
2. Configure StressMaster:
```bash
# Edit your .env file
AI_PROVIDER=gemini
GEMINI_API_KEY=your-api-key-here
GEMINI_MODEL=gemini-pro
```

### Configuration Examples
See `.stressmaster/config/ai-config.json` and `config/ai-config.example.json` for up-to-date examples of configuring Claude, OpenRouter, OpenAI, or Gemini.

### Model Comparison
| Provider | Model | Cost | Performance | Setup Complexity |
| --------- | --------------- | ----------------- | ----------- | ---------------- |
| OpenAI | GPT-3.5-turbo | $0.0015/1K tokens | Excellent | Easy |
| OpenAI | GPT-4 | $0.03/1K tokens | Best | Easy |
| Anthropic | Claude 3 Sonnet | $0.003/1K tokens | Excellent | Easy |
| Google | Gemini Pro | $0.0005/1K tokens | Good | Easy |
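As a rough guide, the per-1K-token rates in the table translate into per-run costs as sketched below (a back-of-the-envelope estimate only; provider pricing changes, and actual token counts depend on prompt length):

```javascript
// Per-1K-token rates from the comparison table above (subject to change).
const ratesPer1K = {
  "gpt-3.5-turbo": 0.0015,
  "gpt-4": 0.03,
  "claude-3-sonnet": 0.003,
  "gemini-pro": 0.0005,
};

// Estimate the USD cost of a given token volume on a given model.
function estimateCost(model, totalTokens) {
  return (totalTokens / 1000) * ratesPer1K[model];
}

// Assuming ~500 tokens per parsed prompt across 100 test runs:
console.log(estimateCost("gpt-3.5-turbo", 500 * 100));
```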
### AI Troubleshooting
#### API Key Issues:
```bash
# Test OpenAI
curl -H "Authorization: Bearer your-key" \
  https://api.openai.com/v1/models

# Test Anthropic
curl -H "x-api-key: your-key" \
  https://api.anthropic.com/v1/models

# Test Gemini
curl "https://generativelanguage.googleapis.com/v1beta/models?key=your-key"
```

## 💡 Usage Examples
### Localhost Testing
```bash
# Basic localhost test
stressmaster "send 10 GET requests to http://localhost:3000/api/v1/users"

# POST with localhost
stressmaster "POST 20 requests to http://localhost:8080/api/orders with JSON body @payload.json"

# Spike test on local API
stressmaster "spike test with 100 requests in 30 seconds to http://localhost:5000/api/products"

# Test with headers
stressmaster "send 50 POST requests to http://localhost:3000/api/auth/login with header Content-Type application/json and JSON body @login.json"

# Increment variables in localhost tests
stressmaster "send 10 POST requests to http://localhost:3000/api/users with JSON body @user-data.json increment userId"
```

### Basic API Testing
```bash
# Simple GET request
stressmaster "send 50 GET requests to https://api.example.com/users"

# POST with JSON payload
stressmaster "POST 200 requests to https://api.example.com/orders with JSON body @order.json"

# POST with inline JSON
stressmaster "POST 10 requests to https://api.example.com/users with JSON body {\"name\":\"test\",\"email\":\"test@example.com\"}"
```

### Advanced Test Scenarios
```bash
# Spike test - sudden load increase
stressmaster "spike test with 1000 requests in 10 seconds to https://api.example.com/products"

# Stress test with ramp-up
stressmaster "stress test starting with 10 users, ramping up to 500 users over 10 minutes to https://api.example.com/search"

# Endurance test - long duration
stressmaster "endurance test with 50 constant users for 2 hours to https://api.example.com/health"

# Volume test - high concurrency
stressmaster "volume test with 500 concurrent users for 5 minutes to https://api.example.com/data"

# Baseline test - establish baseline
stressmaster "baseline test with 10 requests to https://api.example.com/users"
```

### Load Patterns
#### Constant Load
```
Maintain 100 requests per second to https://api.example.com/data for 10 minutes
```

#### Ramp-up Pattern
```
Start with 10 RPS, increase to 200 RPS over 5 minutes, then maintain for 15 minutes
```

#### Step Pattern
```
Load test in steps: 50 users for 2 minutes, then 100 users for 2 minutes, then 200 users for 2 minutes
```

## 🔧 Configuration
### Environment Variables
Key configuration options in `.env`:

```bash
# Application settings
NODE_ENV=production
APP_PORT=3000

# AI Provider settings
AI_PROVIDER=claude
AI_MODEL=claude-3-5-sonnet-20241022

# API Keys (if using cloud providers)
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
GEMINI_API_KEY=your-gemini-key

# Resource limits
APP_MEMORY_LIMIT=1g
K6_MEMORY_LIMIT=2g
```

### Payload Templates
The AI can generate various payload types:
- Random IDs: `{randomId}`, `{uuid}`
- Timestamps: `{timestamp}`, `{isoDate}`
- Random Data: `{randomString}`, `{randomNumber}`
- Sequential Data: `{sequence}`, `{counter}`

Example:
```
POST to https://api.example.com/users with payload:
{
  "id": "{uuid}",
  "name": "{randomString}",
  "email": "user{sequence}@example.com",
  "timestamp": "{isoDate}"
}
```

## 📊 Understanding Results
### Performance Metrics
The tool provides comprehensive metrics:
- Response Times: Min, max, average, and percentiles (50th, 90th, 95th, 99th)
- Throughput: Requests per second and bytes per second
- Error Rates: Success/failure ratios and error categorization
- Resource Usage: CPU and memory consumption during tests
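For intuition, percentile metrics like these can be derived from raw response-time samples. The sketch below uses the simple nearest-rank method and is purely illustrative (the reported percentiles actually come from K6):

```javascript
// Nearest-rank percentile over a list of response times in milliseconds.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

const times = [120, 95, 210, 180, 99, 350, 105, 130, 88, 160];
for (const p of [50, 90, 95, 99]) {
  console.log(`p${p}: ${percentile(times, p)} ms`);
}
```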
### AI Recommendations
After each test, the AI analyzes results and provides:
- Performance bottleneck identification
- Optimization suggestions
- Capacity planning recommendations
- Error pattern analysis
### Export Options
Results can be exported in multiple formats:
- JSON: Raw data for programmatic analysis
- CSV: Spreadsheet-compatible format
- HTML: Rich visual reports with charts
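To illustrate the kind of flattening a CSV export involves, here is a minimal sketch (the metric names are hypothetical; run `stressmaster export csv` for the real output):

```javascript
// Flatten an array of uniform objects into CSV text.
function toCsv(rows) {
  const headers = Object.keys(rows[0]);
  const lines = [headers.join(",")];
  for (const row of rows) lines.push(headers.map((h) => row[h]).join(","));
  return lines.join("\n");
}

// Hypothetical summary metrics for demonstration.
const summary = [
  { metric: "http_req_duration_p95", value: 412, unit: "ms" },
  { metric: "http_reqs_rate", value: 98.4, unit: "req/s" },
  { metric: "error_rate", value: 0.02, unit: "ratio" },
];
console.log(toCsv(summary));
```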
## 🛠️ Advanced Usage
### Custom Test Scenarios
#### Authentication Testing
```
Test API with JWT authentication:
1. POST login to get token
2. Use token for subsequent requests
3. Test 500 authenticated requests per minute
```

#### Complex JSON Payloads
```
Send POST requests to https://api.example.com/orders with complex JSON:
{
  "orderId": "{sequence}",
  "customer": {
    "name": "{randomString}",
    "email": "customer{sequence}@example.com"
  },
  "items": [
    {
      "productId": "PROD-{randomNumber}",
      "quantity": "{randomNumber:1-10}"
    }
  ]
}
```

### Performance Tuning
For high-volume or long-duration tests, ensure you have sufficient system resources:
- Memory: K6 executor may require additional memory for large tests
- Network: Ensure stable internet connection for AI API calls
- Storage: Test results are stored locally in the `.stressmaster/` directory

## 🔍 Monitoring and Troubleshooting
### Health Checks
Verify your setup:
```bash
# Check StressMaster installation
stressmaster --version

# Check K6 installation
k6 version

# Test AI provider configuration
stressmaster setup
```

### Common Issues
#### High Memory Usage
Monitor system resources using your OS tools (Activity Monitor on macOS, Task Manager on Windows, htop on Linux).
#### Test Execution Failures
```bash
# Verify target API accessibility
curl -I https://your-target-api.com

# Check K6 installation
k6 version

# Verify AI provider configuration
stressmaster setup
```

## 🔒 Security Considerations
### Best Practices
- All API calls use HTTPS
- Input validation on all user inputs
- Secure storage of API keys in configuration files
### Data Privacy
- API keys stored locally in `.stressmaster/config/ai-config.json` (excluded from git)
- Test results stored locally in the `.stressmaster/` directory
- No data sent to external services except configured AI providers

## 📦 Installation Options
### NPM (Recommended)
```bash
npm install -g stressmaster
```

### From Source
```bash
git clone https://github.com/mumzworld-tech/StressMaster.git
cd StressMaster
npm install
npm run build
npm link
```

## 🤝 Contributing
### Development Setup
```bash
# Clone repository
git clone https://github.com/mumzworld-tech/StressMaster.git
cd StressMaster

# Install dependencies
npm install

# Run in development mode
npm run dev

# Run tests
npm test
```

### Project Structure
```
src/
├── interfaces/          # User interfaces
│   └── cli/             # Command-line interface
├── core/                # Core functionality
│   ├── parser/          # AI command parsing
│   ├── generator/       # K6 script generation
│   ├── executor/        # Test execution
│   └── analyzer/        # Results analysis
├── types/               # TypeScript definitions
└── utils/               # Utility functions
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Support
For support and questions:
- Review the examples in this README for usage patterns
- Check the CHANGELOG.md for recent updates
- Open an issue on GitHub for bugs or feature requests