# portok

Zero-downtime deployment proxy - routes traffic through a stable port to internal app instances with health-gated switching.

A lightweight Node.js "switchboard" proxy that enables zero-downtime deployments by routing a stable public port to an internal app instance running on a random port, switching only when the new instance is healthy.

## Features
- Zero-downtime switching: Health-gated port switching with connection draining
- Auto-rollback: Automatic rollback if the new port becomes unhealthy
- WebSocket support: Full HTTP and WebSocket proxying
- Lightweight metrics: Built-in metrics without heavy dependencies
- Security: Token-based auth, IP allowlist, rate limiting
- CLI: Easy-to-use command-line interface
## Installation

Global Installation (Recommended):

```bash
# Install globally
npm install -g portok
```

Or install locally in a project:

```bash
npm install portok
```
### Running the Daemon
```bash
# Required environment variables
export LISTEN_PORT=3000
export INITIAL_TARGET_PORT=8080
export ADMIN_TOKEN=your-secret-token

# Start portokd
node portokd.js
```

### Using the CLI
```bash
# Check status
portok status --token your-secret-token

# Switch to new port
portok switch 8081 --token your-secret-token

# Check metrics
portok metrics --token your-secret-token

# Check health
portok health --token your-secret-token
```

> Note: If not installed globally, use `./portok.js` instead of `portok`.

## Configuration
All configuration is via environment variables:
| Variable | Default | Description |
|----------|---------|-------------|
| INSTANCE_NAME | default | Instance identifier (for logging/state file naming) |
| LISTEN_PORT | 3000 | Port the proxy listens on |
| INITIAL_TARGET_PORT | (required) | Initial backend port to proxy to |
| STATE_FILE | /var/lib/portok/ | Path to persist state |
| HEALTH_PATH | /health | Health check endpoint path |
| HEALTH_TIMEOUT_MS | 5000 | Health check timeout |
| DRAIN_MS | 30000 | Connection drain period after switch |
| ROLLBACK_WINDOW_MS | 60000 | Time window for auto-rollback monitoring |
| ROLLBACK_CHECK_EVERY_MS | 5000 | Health check interval during rollback window |
| ROLLBACK_FAIL_THRESHOLD | 3 | Consecutive failures before rollback |
| ADMIN_TOKEN | (required) | Token for admin endpoint authentication |
| ADMIN_ALLOWLIST | 127.0.0.1,::1 | Comma-separated list of allowed IPs |
| ADMIN_UNIX_SOCKET | (optional) | Unix socket path for admin endpoints |

### Performance Tuning
| Variable | Default | Description |
|----------|---------|-------------|
| FAST_PATH | 0 | Enable minimal metrics mode for maximum throughput |
| UPSTREAM_KEEPALIVE | 1 | Enable keep-alive for upstream connections (critical) |
| UPSTREAM_MAX_SOCKETS | 1024 | Maximum sockets per upstream host |
| UPSTREAM_KEEPALIVE_MSECS | 1000 | Keep-alive ping interval in ms |
| SERVER_KEEPALIVE_TIMEOUT | 5000 | Server keep-alive timeout in ms |
| SERVER_HEADERS_TIMEOUT | 6000 | Headers timeout (must be > keepAliveTimeout) |
| ENABLE_XFWD | 1 | Add X-Forwarded-* headers to proxied requests |
| DEBUG_UPSTREAM | 0 | Track upstream socket creation in /__metrics |
| VERBOSE_ERRORS | 0 | Log full error stacks (disable in production) |
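As an illustration of how settings like these are typically consumed (a hypothetical helper, not portokd's actual code), numeric environment variables can be parsed with a fallback to the documented default:

```javascript
// Hypothetical helper showing how numeric env settings such as DRAIN_MS or
// HEALTH_TIMEOUT_MS might be read; portokd's internals may differ.
function envInt(name, fallback) {
  const n = Number.parseInt(process.env[name] ?? '', 10);
  return Number.isFinite(n) ? n : fallback;
}

const DRAIN_MS = envInt('DRAIN_MS', 30000);
const HEALTH_TIMEOUT_MS = envInt('HEALTH_TIMEOUT_MS', 5000);
```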
## Admin Endpoints

All admin endpoints require the `x-admin-token` header.

### GET /__status
Returns current proxy status.
```bash
curl -H "x-admin-token: your-token" http://127.0.0.1:3000/__status
```

Response:
```json
{
  "activePort": 8080,
  "drainUntil": null,
  "lastSwitch": {
    "from": 8081,
    "to": 8080,
    "at": "2024-01-15T10:30:00.000Z",
    "reason": "manual",
    "id": "uuid-here"
  }
}
```
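A deploy script can interpret this payload, for example to wait out an in-progress drain. The sketch below assumes `drainUntil` is either null or an ISO-8601 timestamp when a drain is active (an inference from the timestamp format of the other fields):

```javascript
// Sketch: compute remaining drain time from a /__status payload.
// Assumes drainUntil is null or an ISO-8601 timestamp.
function drainRemainingMs(status, now = Date.now()) {
  if (!status.drainUntil) return 0;
  return Math.max(0, new Date(status.drainUntil).getTime() - now);
}
```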
### GET /__metrics

Returns proxy metrics.
```bash
curl -H "x-admin-token: your-token" http://127.0.0.1:3000/__metrics
```

Response:
```json
{
  "startedAt": "2024-01-15T10:00:00.000Z",
  "inflight": 5,
  "inflightMax": 100,
  "totalRequests": 50000,
  "totalProxyErrors": 2,
  "statusCounters": {
    "2xx": 49500,
    "3xx": 100,
    "4xx": 398,
    "5xx": 2
  },
  "rollingRps60": 125.5,
  "health": {
    "activePortOk": true,
    "lastCheckedAt": "2024-01-15T10:29:55.000Z",
    "consecutiveFails": 0
  },
  "lastProxyError": null
}
```
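A monitoring script might derive an error rate from the `statusCounters` shape above (a sketch; the bucket names follow the example response):

```javascript
// Sketch: 5xx error rate from the statusCounters object in /__metrics.
function errorRate(statusCounters) {
  const total = Object.values(statusCounters).reduce((a, b) => a + b, 0);
  return total === 0 ? 0 : (statusCounters['5xx'] ?? 0) / total;
}
```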
### POST /__switch

Switch to a new target port. The daemon health-checks the new port before switching.
```bash
curl -X POST -H "x-admin-token: your-token" \
  "http://127.0.0.1:3000/__switch?port=8081"
```

Success response (200):
```json
{
  "success": true,
  "message": "Switched to port 8081",
  "switch": {
    "from": 8080,
    "to": 8081,
    "at": "2024-01-15T10:30:00.000Z",
    "reason": "manual",
    "id": "uuid-here"
  }
}
```

Failure response (409 - health check failed):
```json
{
  "error": "Health check failed",
  "message": "Port 8081 did not respond with 2xx at /health"
}
```

### GET /__health
Check health of the current active port.
```bash
curl -H "x-admin-token: your-token" http://127.0.0.1:3000/__health
```

## CLI Reference
```
portok <command> [options]

Management Commands:
  init        Initialize portok (creates dirs, installs systemd service)
  add         Create a new service instance
  remove      Remove a service instance (stops, disables, deletes config/state)
  clean       Remove ALL portok data (configs, states, systemd service)
  list        List all configured instances and their status

Service Control Commands:
  start       Start a portok service (systemctl start portok@<name>)
  stop        Stop a portok service
  restart     Restart a portok service
  enable      Enable service at boot
  disable     Disable service at boot
  logs        Show service logs (journalctl)

Operational Commands:
  status      Show current proxy status
  metrics     Show proxy metrics
  switch      Switch to a new target port
  health      Check health of active port

Options:
  --url       Daemon URL (default: http://127.0.0.1:3000)
  --instance  Target instance by name (reads /etc/portok/<name>.env)
  --token     Admin token (or PORTOK_TOKEN env var)
  --json      Output as JSON
  --help      Show help

Options for 'add' command:
  --port      Listen port (default: random 3000-3999)
  --target    Target port (default: random 8000-8999)
  --health    Health check path (default: /health)
  --force     Overwrite existing config

Options for 'remove' command:
  --force       Skip confirmation prompt
  --keep-state  Keep state file (/var/lib/portok/<name>.json)

Options for 'clean' command:
  --force     Skip confirmation prompt (required)

Options for 'logs' command:
  --follow, -f  Follow log output
  --lines, -n   Number of lines to show (default: 50)
```

### Quick Start
```bash
# 1. Initialize portok (creates /etc/portok and /var/lib/portok)
sudo portok init

# 2. Create a new service
sudo portok add api --port 3001 --target 8001

# 3. Start the service
sudo portok start api

# 4. Enable at boot
sudo portok enable api

# 5. Check status
portok status --instance api

# 6. List all services
portok list
```

### CLI Environment Variables
- `PORTOK_URL`: Default daemon URL
- `PORTOK_TOKEN`: Admin token

### Common Operations
```bash
# Initialize portok (run once)
sudo portok init

# Create services with specific ports
sudo portok add api --port 3001 --target 8001
sudo portok add web --port 3002 --target 8002

# Service management
sudo portok start api
sudo portok stop api
sudo portok restart api
sudo portok enable api   # Enable at boot
sudo portok disable api  # Disable at boot

# Remove a service (stops, disables, removes config and state)
sudo portok remove api --force
sudo portok remove api --force --keep-state  # Keep state file

# View logs
portok logs api
portok logs api --follow  # Follow log output
portok logs api -n 100    # Show last 100 lines

# List all instances with status
portok list

# Check status by instance name
portok status --instance api

# Get metrics as JSON
portok metrics --instance api --json

# Switch to new port
portok switch 8081 --instance api

# Health check (exits 0 if healthy, 1 if unhealthy)
portok health --instance api && echo "OK" || echo "FAIL"

# Direct URL mode (without instance)
portok status --url http://127.0.0.1:3000 --token your-token
```

## systemd Service
Example systemd unit file (`/etc/systemd/system/portokd.service`):

```ini
[Unit]
Description=Portok Zero-Downtime Proxy
After=network.target

[Service]
Type=simple
User=www-data
WorkingDirectory=/opt/portok
ExecStart=/usr/bin/node /opt/portok/portokd.js
Restart=always
RestartSec=5

# Environment
Environment=LISTEN_PORT=3000
Environment=INITIAL_TARGET_PORT=8080
Environment=STATE_FILE=/var/lib/portok/state.json
Environment=ADMIN_TOKEN=your-secret-token
Environment=HEALTH_PATH=/health
Environment=DRAIN_MS=30000
Environment=ROLLBACK_WINDOW_MS=60000
Environment=ROLLBACK_CHECK_EVERY_MS=5000
Environment=ROLLBACK_FAIL_THRESHOLD=3

# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/portok

[Install]
WantedBy=multi-user.target
```

Enable and start:
```bash
sudo systemctl daemon-reload
sudo systemctl enable portokd
sudo systemctl start portokd
```

## Testing
Tests run in Docker for Linux compatibility:
```bash
# Run all tests
docker compose run --rm test

# Run specific test file
docker compose run --rm test npm test -- --test-name-pattern="proxy"

# Development shell
docker compose run --rm dev
```

Or run locally (requires Node.js 20+):

```bash
npm test
```

## Benchmarks
Benchmarks measure proxy performance:
```bash
# Run all benchmarks in Docker
docker compose run --rm bench

# Quick benchmark (shorter duration)
docker compose run --rm bench npm run bench -- --quick

# Output JSON for CI
docker compose run --rm bench npm run bench -- --json > results.json
```

Benchmark scenarios:
| Benchmark | Description |
|-----------|-------------|
| Throughput | Maximum requests/sec with 100 connections |
| Latency | Latency percentiles (p50, p95, p99) |
| Connections | Scaling with 10-500 concurrent connections |
| Switching | Switch latency and request loss |
| Baseline | Direct vs proxied overhead comparison |
| Keep-Alive | Validates keep-alive performance (RPS >= 70% of direct) |
## Multi-Instance Setup
Portok supports running multiple isolated instances on the same host, each managing a different application. This is the recommended approach for multi-app deployments.
### Directory Layout
```
/etc/portok/
├── api.env      # Config for "api" instance
├── web.env      # Config for "web" instance
└── worker.env   # Config for "worker" instance

/var/lib/portok/
├── api.json     # State file for "api" instance
├── web.json     # State file for "web" instance
└── worker.json  # State file for "worker" instance
```

### Per-Instance Configuration
Each instance has its own env file at `/etc/portok/<name>.env`. Example: `/etc/portok/api.env`
```bash
# Required
LISTEN_PORT=3001
INITIAL_TARGET_PORT=8001
ADMIN_TOKEN=api-secret-token-change-me

# Optional (defaults shown)
HEALTH_PATH=/health
HEALTH_TIMEOUT_MS=5000
DRAIN_MS=30000
ROLLBACK_WINDOW_MS=60000
ROLLBACK_CHECK_EVERY_MS=5000
ROLLBACK_FAIL_THRESHOLD=3
```

Example: `/etc/portok/web.env`
```bash
LISTEN_PORT=3002
INITIAL_TARGET_PORT=8002
ADMIN_TOKEN=web-secret-token-change-me
```

### systemd Template
The `portok init` command automatically installs and configures the systemd template:

```bash
# Initialize Portok (creates dirs, installs systemd service)
sudo portok init

# Create a new instance
sudo portok add api --port 3001 --target 8001

# Start and enable
sudo portok start api
sudo portok enable api

# Check status
sudo portok status api
portok logs api --follow
```

#### Node.js Installation Methods
System-wide Node.js (Recommended for Production):
```bash
# Install via OS package manager
sudo apt install nodejs  # Debian/Ubuntu
sudo yum install nodejs  # RHEL/CentOS

# Standard init
sudo portok init
```

nvm/fnm/volta (development, or when a system node is unavailable):
```bash
# Use --nvm flag for less restrictive security settings
sudo portok init --nvm

# Or specify a custom node path
sudo portok init --node-path=/custom/path/to/node
```

Preview changes before applying:

```bash
sudo portok init --dry-run
```

#### Diagnosing Issues
Use `portok doctor` to check your installation:

```bash
portok doctor
```

Example output:
```
Portok Doctor - Diagnosing installation...

✓ Node.js binary      /usr/local/bin/node (v20.19.6)
✓ System Node.js      /usr/local/bin/node (v20.19.6)
✓ Config directory    /etc/portok (2 config files)
✓ State directory     /var/lib/portok (writable)
✓ systemd service     /etc/systemd/system/portok@.service
✓ ExecStart node      /usr/local/bin/node
✓ ProtectHome         ProtectHome=true
✓ systemctl           Available
✓ Running instances   2 running, 0 failed

All checks passed. Portok is ready to use.
```

#### Security Hardening
The default `portok@.service` includes production-grade security settings:

| Setting | Value | Purpose |
|---------|-------|---------|
| ProtectHome | true | Block access to home directories |
| ProtectSystem | strict | Make /usr, /boot, /etc read-only |
| PrivateTmp | true | Private /tmp per service |
| NoNewPrivileges | true | Prevent privilege escalation |
| ReadWritePaths | /var/lib/portok | Only state directory writable |
For nvm users, `portok init --nvm` uses ProtectHome=read-only to allow ~/.nvm access.

#### Manual systemd Setup (Advanced)
For manual setup without `portok init`:

```bash
# Copy the template (choose one)
sudo cp portok@.service /etc/systemd/system/                     # Production (system node)
sudo cp portok@.service.nvm /etc/systemd/system/portok@.service  # NVM variant

# Edit paths in the service file
sudo nano /etc/systemd/system/portok@.service
# Update: ExecStart, WorkingDirectory, User, Group

# Create directories
sudo mkdir -p /etc/portok /var/lib/portok
sudo chown $(whoami):$(id -gn) /var/lib/portok

# Reload systemd
sudo systemctl daemon-reload

# Start instances
sudo systemctl start portok@api
```

### CLI Instance Targeting
Use `--instance` to target a specific instance:

```bash
# Target by instance name (reads /etc/portok/<name>.env)
portok status --instance api
portok metrics --instance web
portok switch 8081 --instance api
portok health --instance web --json

# Explicit URL/token still works (and overrides the env file)
portok status --url http://127.0.0.1:3001 --token api-secret-token
```

### Instance Isolation
Each instance is fully isolated:
- Separate state files: No shared state between instances
- Separate tokens: Each instance has its own admin token
- Separate metrics: Metrics are per-instance
- Separate rollback monitors: Each instance tracks its own rollback window
- Separate rate limits: Rate limiting is per-instance
### Example Topology
```
┌─────────────────────────────────────────────────────────────┐
│ Load Balancer / Nginx │
└───────────┬─────────────────────────┬───────────────────────┘
│ │
▼ ▼
┌───────────────────────┐ ┌───────────────────────┐
│ portok@api (:3001) │ │ portok@web (:3002) │
│ State: api.json │ │ State: web.json │
└───────────┬───────────┘ └───────────┬───────────┘
│ │
▼ ▼
┌───────────────────────┐ ┌───────────────────────┐
│ api-v1 (:8001) │ │ web-v1 (:8002) │
│ api-v2 (:8011) │ │ web-v2 (:8012) │
└───────────────────────┘ └───────────────────────┘
```

## Architecture
```
┌─────────────────────────────────────────────────────────┐
│ Client Traffic │
└─────────────────────────┬───────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ portokd (LISTEN_PORT) │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────┐ │
│ │ Proxy │ │ Admin │ │ Health Monitor │ │
│ │ (http-proxy)│ │ Endpoints │ │ (auto-rollback)│ │
│ └──────┬──────┘ └─────────────┘ └─────────────────┘ │
│ │ │
│ ┌──────┴──────┐ │
│ │Socket Tracker│ ← Maps connections to ports for drain │
│ └──────┬──────┘ │
└─────────┼───────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ 127.0.0.1:ACTIVE_PORT │
│ (Your App) │
└─────────────────────────────────────────────────────────┘
```

## Zero-Downtime Deployment Flow
1. Deploy the new app version on a random port (e.g., 54321)
2. The new app starts and exposes its /health endpoint
3. Call `portok switch 54321` or `POST /__switch?port=54321`
4. Portok health-checks the new port
5. If healthy: switch traffic and drain old connections
6. If the new port fails during the rollback window: auto-rollback
7. The old app can be stopped after the drain period
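Step 3 can be scripted. The sketch below (assuming Node 18+ for global `fetch`; the URL and token are placeholders) builds and sends the admin request, treating a 409 as a refused switch:

```javascript
// Sketch of the admin call behind `portok switch`; baseUrl and token are
// placeholders, adjust for your daemon.
function buildSwitchRequest(port, { baseUrl = 'http://127.0.0.1:3000', token }) {
  return {
    url: `${baseUrl}/__switch?port=${encodeURIComponent(port)}`,
    options: { method: 'POST', headers: { 'x-admin-token': token } },
  };
}

async function switchTo(port, opts) {
  const { url, options } = buildSwitchRequest(port, opts);
  const res = await fetch(url, options);
  if (res.status === 409) throw new Error('health check failed; switch refused');
  if (!res.ok) throw new Error(`switch failed with status ${res.status}`);
  return res.json();
}
```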
## Performance Notes

Portok is optimized for high-throughput, low-latency proxying.
### Fast Path Mode
Enable `FAST_PATH=1` for maximum throughput in production or benchmarks:

```bash
export FAST_PATH=1
```

This disables the more expensive metrics (status counters, rolling RPS) while keeping essential counters (totalRequests, inflight, proxyErrors).
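Conceptually (a simplified sketch, not portokd's actual implementation), the fast path skips per-request status bucketing while the cheap counters always run:

```javascript
// Simplified sketch of the FAST_PATH trade-off: cheap counters always run,
// status bucketing only when fast path is off.
const FAST_PATH = process.env.FAST_PATH === '1';

function recordResponse(metrics, statusCode) {
  metrics.totalRequests++; // always kept
  if (!FAST_PATH) {
    const bucket = `${Math.floor(statusCode / 100)}xx`;
    metrics.statusCounters[bucket] = (metrics.statusCounters[bucket] ?? 0) + 1;
  }
}
```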
### Upstream Keep-Alive
The upstream keep-alive agent is critical for performance. Without it, every request opens a new TCP connection, which adds roughly 0.5-2 ms of latency and significantly limits throughput.

Keep-alive is enabled by default (`UPSTREAM_KEEPALIVE=1`). Do not disable it in production.
### Validating Performance

Run the validation benchmark to verify performance:
```bash
# Quick validation (3s)
FAST_PATH=1 node bench/validate.js --quick

# Full validation (10s)
FAST_PATH=1 node bench/validate.js

# Manual autocannon test
# Direct (app port):
npx autocannon -c 50 -d 10 http://127.0.0.1:<port>/
# Proxied (portok LISTEN_PORT):
npx autocannon -c 50 -d 10 http://127.0.0.1:<port>/
```

Acceptance Criteria:
- RPS >= 30% of direct (http-proxy adds inherent overhead)
- Added p50 latency <= 10ms
- p99 latency <= 50ms
Typical Results (localhost, FAST_PATH=1):
- Direct: ~28,000 RPS, p50=1ms
- Proxied: ~13,000 RPS, p50=3ms
- Socket reuse: 800-2000x (confirms keep-alive working)
### Debugging Upstream Sockets
Enable `DEBUG_UPSTREAM=1` to track upstream socket creation in /__metrics:

```bash
export DEBUG_UPSTREAM=1
```

This exposes `upstreamSocketsCreated` in the metrics output, which lets you verify that keep-alive is working.
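The socket-reuse figure quoted in the typical results above can be derived from these fields (a sketch; the field names follow this README):

```javascript
// Sketch: approximate keep-alive reuse as requests per upstream socket.
// A high ratio (hundreds or more) means connections are being reused.
function socketReuseRatio({ totalRequests, upstreamSocketsCreated }) {
  if (!upstreamSocketsCreated) return Infinity; // no new sockets created
  return totalRequests / upstreamSocketsCreated;
}
```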
### Optimization Summary

| Optimization | Impact |
|--------------|--------|
| FAST_PATH mode | Minimal per-request overhead |
| Keep-alive agent | 10-20x throughput vs no keep-alive |
| Connection header stripping | Ensures upstream keep-alive works |
| Minimal URL parsing | No allocations in hot path |
| res.once() listeners | Auto-cleanup, no memory leaks |
| Socket reuse tracking | DEBUG_UPSTREAM confirms keep-alive |
## Security

- Token authentication: All admin endpoints require the `x-admin-token` header
- Timing-safe comparison: Token validation uses `crypto.timingSafeEqual`
- IP allowlist: Admin endpoints are restricted by IP (default: localhost only)
- Rate limiting: 10 requests/minute per IP for admin endpoints
- SSRF protection: The target host is fixed to 127.0.0.1

## License

MIT