A privacy-respecting, lightweight Node.js package that analyzes HTTP requests and calculates a risk score (0-100) indicating the likelihood that the request came from a bot or automated script.
Installation
```bash
npm install request-risk-score
```
Usage
```javascript
const { analyzeRequest } = require('request-risk-score');

// In your request handler (Express, HTTP, etc.)
app.use((req, res, next) => {
  const risk = analyzeRequest(req);

  if (risk.bucket === 'likely_automated') {
    console.log(`Blocked suspicious request from ${risk.ip}. Score: ${risk.score}`);
    console.log('Signals:', risk.signals);
    return res.status(403).send('Request verification failed.');
  }

  // Add risk info to the request for downstream logic
  req.risk = risk;
  next();
});
```
Example Result
`analyzeRequest` returns a plain object:
```json
{
  "score": 78,
  "bucket": "likely_automated",
  "signals": [
    "no_user_agent",
    "rate_limit_exceeded",
    "regular_request_timing"
  ],
  "ip": "203.0.113.10"
}
```
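The signals array is useful for monitoring even when you do not block. Below is a minimal sketch of counting signal occurrences downstream of the middleware from the usage example; the in-memory `signalCounts` map is purely illustrative:

```javascript
// Count how often each signal fires, for logging or a dashboard.
// Assumes req.risk was attached by the middleware in the usage example.
const signalCounts = new Map();

app.use((req, res, next) => {
  if (req.risk) {
    for (const signal of req.risk.signals) {
      signalCounts.set(signal, (signalCounts.get(signal) || 0) + 1);
    }
  }
  next();
});

// Dump the counts once a minute, e.g. into your log aggregation.
setInterval(() => console.log(Object.fromEntries(signalCounts)), 60000);
```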
Decision Buckets
| Score  | Bucket           | Recommendation                             |
|--------|------------------|--------------------------------------------|
| 0-39   | likely_human     | Likely human. Allow.                       |
| 40-69  | suspicious       | Suspicious. Monitor or present a CAPTCHA.  |
| 70-100 | likely_automated | Likely automated. Block or challenge.      |
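How you respond to each bucket is up to your application. The sketch below maps the table's recommendations onto middleware; `sendCaptchaChallenge` is a hypothetical helper you would supply yourself:

```javascript
app.use((req, res, next) => {
  const risk = analyzeRequest(req);
  req.risk = risk;

  if (risk.bucket === 'likely_automated') {
    // 70-100: block or challenge.
    return res.status(403).send('Request verification failed.');
  }

  if (risk.bucket === 'suspicious') {
    // 40-69: monitor or present a CAPTCHA (sendCaptchaChallenge is hypothetical).
    return sendCaptchaChallenge(req, res);
  }

  // 0-39: likely human, allow through.
  next();
});
```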
Configuration
You can pass an options object as the second argument to `analyzeRequest`:
```javascript
const options = {
  enableTorCheck: false,               // Default: false (requires external list)
  rateLimitWindowMs: 60000,
  rateLimitMaxRequestPerWindow: 100,
  ip: req.ip                           // Manually pass the IP if using a proxy
};

const result = analyzeRequest(req, options);
```
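If your app runs behind a reverse proxy or load balancer, Express's `req.ip` reflects the proxy unless you enable `trust proxy`. One way to wire that up with the `ip` option above (the `trust proxy` setting is standard Express; the middleware itself is just a sketch):

```javascript
const express = require('express');
const { analyzeRequest } = require('request-risk-score');

const app = express();

// Trust the first proxy hop so req.ip resolves to the real client address.
app.set('trust proxy', 1);

app.use((req, res, next) => {
  // Pass the resolved client IP explicitly, as in the options example above.
  req.risk = analyzeRequest(req, { ip: req.ip });
  next();
});
```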
Signals Analyzed
1. Network: Bogus or unroutable IPs, and private/loopback addresses appearing in a production context.
2. Headers: Missing standard headers, known tool User-Agent patterns (curl, wget), and whether browser-specific headers are present.
3. Behavior:
   * Rate Limiting: Sliding-window request counter.
   * Path Entropy: Detects random scanning paths (e.g. /admin/w8x7e9...); see the sketch below.
   * Sensitive Paths: Flags access to known protected paths (e.g. /admin, /login).
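The path-entropy idea is simple: randomly generated path segments have high character-level Shannon entropy compared to dictionary words. The sketch below illustrates the concept only; it is not the package's implementation, and the length and entropy thresholds are arbitrary:

```javascript
// Character-level Shannon entropy of a string, in bits per character.
function shannonEntropy(str) {
  const counts = {};
  for (const ch of str) counts[ch] = (counts[ch] || 0) + 1;
  return Object.values(counts).reduce((sum, n) => {
    const p = n / str.length;
    return sum - p * Math.log2(p);
  }, 0);
}

// Flag a path when any segment is long and looks random (high entropy).
function looksLikeScanningPath(path) {
  return path
    .split('/')
    .filter(Boolean)
    .some((segment) => segment.length >= 8 && shannonEntropy(segment) > 3.0);
}

console.log(looksLikeScanningPath('/admin/w8x7e9qz1k')); // true
console.log(looksLikeScanningPath('/admin/settings'));   // false
```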