# AI-Powered Website Analysis Platform

Comprehensive Website Analysis & AI Code Surgery - 100% Free Forever

## Features
- Deep Runtime Analysis: Full Playwright-based crawling with JavaScript execution
- Network Interception: Captures all requests, responses, and timing data
- Interaction Simulation: Programmatically tests buttons, inputs, and interactive elements
- Lighthouse Integration: Performance, accessibility, SEO, and best practices scoring
- SEO & Markup Validation: W3C Validator, Google Mobile-Friendly Test integration
- Performance Testing: Google PageSpeed, WebPageTest, GTmetrix integration
- Security Audit: Mozilla Observatory, Security Headers, SSL Labs, Safe Browsing integration
- Malware Detection: VirusTotal, Hybrid Analysis, URLScan.io support
- AI API Selection: Ask AI to choose the best scanning API for your needs
- No-JS Differential: Compares JS-enabled vs disabled behavior
- AI Code Surgeon: Upload your code and get AI-powered fixes
## Requirements

- Node.js 18+
- npm or yarn
## Installation

```bash
# Install dependencies
npm install
```
### Configuration

Edit `.env` and add your OpenRouter API key for AI features:

```
OPENROUTER_API_KEY=your_key_here
```

Get a free API key at OpenRouter.
### Optional API Keys

Add these to your `.env` for enhanced scanning capabilities (all have generous free tiers):

```
# Performance & Speed (Scalable Free Options)
GOOGLE_PAGESPEED_API_KEY=your_key        # Google PageSpeed Insights
WEBPAGETEST_API_KEY=your_key             # WebPageTest (public instance available)
GOOGLE_MOBILE_FRIENDLY_API_KEY=your_key  # Google Mobile-Friendly Test

# SEO & Markup Validation
# W3C Markup Validator (no key needed) - HTML validation for SEO

# Security & Malware (No/Low Limits)
VIRUSTOTAL_API_KEY=your_key              # VirusTotal (500 requests/day free)
GOOGLE_SAFE_BROWSING_API_KEY=your_key    # Google Safe Browsing
URLSCAN_API_KEY=your_key                 # URLScan.io (free tier)
HYBRID_ANALYSIS_API_KEY=your_key         # Hybrid Analysis (free tier)

# Always Free (No API Keys Needed)
# Mozilla Observatory, Security Headers, SSL Labs
```

All APIs have free tiers. See `.env.example` for details.
## Running

```bash
# Development
npm run dev

# Production
npm start
```

Visit http://localhost:3000

## Architecture

```
quantumreasoning/
├── public/ # Frontend assets
│ ├── index.html # Single-page app
│ ├── styles.css # Exact styling spec
│ └── app.js # Frontend logic
├── server/
│ ├── index.js # Express server + WebSocket
│ └── modules/
│ ├── scan-pipeline.js # 7-phase analysis engine
│ ├── ai-analyzer.js # OpenRouter AI integration
│ ├── file-manager.js # Upload/ZIP handling
│ └── data-store.js # In-memory storage
├── package.json
└── .env.example
```
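For orientation, here is a minimal sketch of how these pieces could be wired together. It is illustrative only, not the actual `server/index.js`; the pipeline call is stubbed and the module wiring is assumed.

```js
// Minimal sketch: Express + WebSocket server with in-memory scan storage.
const express = require('express');
const http = require('http');
const { WebSocketServer } = require('ws');
const { randomUUID } = require('crypto');

const app = express();
app.use(express.json());
app.use(express.static('public'));            // serves the single-page app

const scans = new Map();                      // in-memory storage (cf. data-store.js)

const server = http.createServer(app);
const wss = new WebSocketServer({ server });  // pushes scan progress to the UI
wss.on('connection', (socket) => {
  socket.send(JSON.stringify({ type: 'connected' })); // progress events would follow
});

app.post('/api/v1/scan', (req, res) => {
  const scanId = randomUUID();
  scans.set(scanId, { status: 'started', url: req.body.url });
  // scan-pipeline.js would kick off the 7-phase analysis here (stubbed in this sketch)
  res.json({ scanId, status: 'started' });
});

app.get('/api/v1/scan/:scanId', (req, res) => {
  const scan = scans.get(req.params.scanId);
  if (!scan) return res.status(404).json({ error: 'not found' });
  res.json(scan);
});

server.listen(3000);
```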
## Scan Pipeline

1. Fetch Initial HTML - Loads page with Playwright
2. Execute JavaScript Runtime - Captures all JS files and builds AST
3. Intercept Network Requests - Records all network activity
4. Simulate Interactions - Hovers, clicks, types on interactive elements
5. Run Lighthouse - Google PageSpeed Insights API
6. Audit Security - Mozilla Observatory + OWASP checks
7. No-JS Differential - Compares behavior without JavaScript
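As a conceptual illustration of phases 1, 3, 4, and 7, here is a stripped-down Playwright sketch. The real pipeline in `scan-pipeline.js` also builds ASTs, runs Lighthouse, and performs security audits, so treat this only as an outline of the idea.

```js
// Illustrative sketch of a few pipeline phases with Playwright (not the project's actual code).
const { chromium } = require('playwright');

async function quickScan(url) {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Phase 3: record requests/responses while the page loads
  const network = [];
  page.on('response', (response) => {
    network.push({ url: response.url(), status: response.status(), timing: response.request().timing() });
  });

  // Phase 1: fetch the initial HTML with JavaScript enabled
  await page.goto(url, { waitUntil: 'networkidle' });
  const htmlWithJs = await page.content();

  // Phase 4: simulate interactions on interactive elements
  for (const button of await page.locator('button').all()) {
    await button.hover({ timeout: 1000 }).catch(() => {}); // skip elements that can't be hovered
  }

  // Phase 7: reload with JavaScript disabled and compare the rendered HTML
  const noJsContext = await browser.newContext({ javaScriptEnabled: false });
  const noJsPage = await noJsContext.newPage();
  await noJsPage.goto(url);
  const htmlWithoutJs = await noJsPage.content();

  await browser.close();
  return { network, jsChangesDom: htmlWithJs !== htmlWithoutJs };
}
```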
## API Endpoints
### Start a Scan

```
POST /api/v1/scan
Body: { "url": "https://subto.one" }
Response: { "scanId": "uuid", "status": "started" }
```

Notes on queuing and rate limiting:
- Concurrency limit: the server allows up to `MAX_CONCURRENT_SCANS` (default 50) scans to run concurrently.
- If the server is at capacity, API clients can opt into an automatic queue by sending the header `X-Accept-Queue: true` with the POST request. In that case the request is accepted and queued; the response is `202 Accepted` with JSON `{ scanId, status: 'queued', queuePosition }`.
- If the client does not opt into queuing and the server is at capacity, the API returns `429 Too Many Requests` with a short message instructing the client to retry or set `X-Accept-Queue: true`.
- The UI automatically sets `X-Accept-Queue: true` and displays "Queuing" plus the user's position (e.g., "You are #3 in queue").
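A minimal client sketch (using the global fetch available in Node 18+) that starts a scan and opts into the queue, following the behavior described above; the host, port, and error handling are illustrative.

```js
// Start a scan, opting into the automatic queue (illustrative client code).
async function startScan(url) {
  const res = await fetch('http://localhost:3000/api/v1/scan', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Accept-Queue': 'true',              // queue instead of receiving 429 at capacity
    },
    body: JSON.stringify({ url }),
  });

  if (res.status === 429) throw new Error('Server at capacity; retry later');

  const data = await res.json();             // { scanId, status: 'started' | 'queued', queuePosition? }
  if (data.status === 'queued') {
    console.log(`Queued at position ${data.queuePosition}`);
  }
  return data.scanId;
}
```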
### Get Scan Results

```
GET /api/v1/scan/:scanId
Response: Full scan data
```
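A simple polling sketch for retrieving results. Only the `started` and `queued` statuses are documented above, so treating anything else as terminal is an assumption.

```js
// Poll until the scan reports something other than an in-progress status (illustrative).
async function waitForScan(scanId, intervalMs = 5000) {
  for (;;) {
    const res = await fetch(`http://localhost:3000/api/v1/scan/${scanId}`);
    if (!res.ok) throw new Error(`Scan ${scanId} not available (HTTP ${res.status})`);
    const scan = await res.json();
    if (scan.status !== 'started' && scan.status !== 'queued') return scan; // full scan data
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```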
### AI Analysis

```
POST /api/v1/ai/analyze
Body: { "scanId": "uuid", "files": [...] }
Response: { "summary": "...", "changes": [...] }
```
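A hedged client sketch for this endpoint. The exact shape of each `files` entry is not documented above, so the `{ name, content }` objects used here are a guess.

```js
// Request AI-powered fixes for an existing scan (illustrative; the `files` entry shape is assumed).
async function analyzeCode(scanId, files /* e.g. [{ name: 'app.js', content: '...' }] */) {
  const res = await fetch('http://localhost:3000/api/v1/ai/analyze', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ scanId, files }),
  });
  const { summary, changes } = await res.json(); // summary text + suggested changes
  return { summary, changes };
}
```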
## File Upload

### Supported File Types
- JavaScript: .js, .ts, .jsx, .tsx
- Styles: .css, .scss
- Markup: .html, .vue, .svelte
- Data: .json, .env, .md

### Upload Limits
- Single file: 50 MB max
- Total upload: 250 MB max
- File count: 5,000 files max

### Excluded Directories
- node_modules/
- .git/
- .next/, dist/, build/
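These constraints are presumably enforced server-side (e.g., in `file-manager.js`); the sketch below shows one plausible pre-check, with an assumed manifest shape of `{ relativePath, size }`.

```js
// Illustrative pre-check of an upload manifest against the limits above (names are assumptions).
const path = require('path');

const ALLOWED = new Set(['.js', '.ts', '.jsx', '.tsx', '.css', '.scss',
                         '.html', '.vue', '.svelte', '.json', '.md']);
const EXCLUDED_DIRS = new Set(['node_modules', '.git', '.next', 'dist', 'build']);

function validateUpload(files) {
  if (files.length > 5000) throw new Error('Too many files (max 5,000)');
  let totalBytes = 0;
  for (const f of files) {
    const inExcludedDir = f.relativePath.split('/').some((part) => EXCLUDED_DIRS.has(part));
    const isEnvFile = path.basename(f.relativePath) === '.env';  // .env has no extname
    if (inExcludedDir) continue;                                 // excluded dirs are skipped
    if (!ALLOWED.has(path.extname(f.relativePath)) && !isEnvFile) continue; // unsupported types skipped
    if (f.size > 50 * 1024 * 1024) throw new Error(`File too large: ${f.relativePath}`);
    totalBytes += f.size;
  }
  if (totalBytes > 250 * 1024 * 1024) throw new Error('Upload exceeds 250 MB total');
}
```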
## AI Models (Free Tier)

- Default: `deepseek/deepseek-r1:free`
- Code Analysis: `qwen/qwen3-coder:free`
- Security: `mistralai/devstral-small:free`

### Rate Limits
- 5 seconds between requests
- 50 requests per day

## Data Retention
All scan data is automatically deleted after 24 hours. No user accounts required.
## Security
- No binary file uploads
- Server-side validation even if client-side passes
- Streaming uploads to disk, not memory
- Directory traversal prevention
- MIME type validation
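As one example of the directory traversal point above, a typical guard resolves every user-supplied path against the upload directory and rejects anything that escapes it (illustrative; not the project's actual code).

```js
// Reject uploaded file paths that escape the upload directory (e.g. "../../etc/passwd").
const path = require('path');

function resolveInsideUploadDir(uploadDir, userSuppliedPath) {
  const base = path.resolve(uploadDir);
  const resolved = path.resolve(base, userSuppliedPath);
  if (resolved !== base && !resolved.startsWith(base + path.sep)) {
    throw new Error('Path escapes upload directory');
  }
  return resolved;
}
```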
## License
MIT
## Deployment (Docker + nginx reverse-proxy)
This repo includes a simple production-ready scaffold under `deploy/` that runs the Node app behind an nginx reverse proxy, which terminates TLS and forwards requests (including WebSocket upgrades) to the app.

Quick local test (self-signed cert):
```bash
cd deploy
./mk-self-signed.sh # creates deploy/certs/fullchain.pem and privkey.pem
docker compose up --build
```

Production notes:
- Use Node.js 22+ in your runtime (required by Lighthouse v13).
- Terminate TLS at your load balancer (Cloud Load Balancer, nginx, etc.) and forward plain HTTP to the Node container.
- Ensure WebSocket upgrades are forwarded by the proxy.
- Provide secrets via environment variables or your hosting secret manager (do NOT commit secrets).
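For the WebSocket bullet above, the standard nginx directives look like the following (illustrative only; the scaffold under `deploy/` ships its own config, and the upstream name `app:3000` is an assumption):

```nginx
# Forward HTTP traffic and WebSocket upgrades to the Node container.
location / {
    proxy_pass http://app:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # required for WebSocket upgrades
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```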