Secure WebSocket proxy server for Microsoft Foundry Voice Live API with Express and Docker support


Secure WebSocket proxy for Azure AI Foundry Voice Live API. Supports Voice, Avatar, and Agent modes.
Why use this proxy? Browser WebSockets cannot send Authorization headers, and Azure AI Foundry endpoints require them. This proxy injects credentials server-side and forwards messages transparently.
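The core idea, as a minimal sketch only (not the package's actual implementation): accept a browser WebSocket, open an upstream connection with the credential attached server-side, and relay frames in both directions. `UPSTREAM_URL` and the `api-key` header below are placeholders for the Azure AI Foundry endpoint details.

```typescript
// Conceptual sketch - the published proxy adds CORS checks, rate limiting,
// connection caps, buffering, and mode detection on top of this idea.
import { WebSocketServer, WebSocket } from "ws";

const UPSTREAM_URL = process.env.UPSTREAM_URL!; // placeholder for the Voice Live endpoint
const API_KEY = process.env.FOUNDRY_API_KEY!;   // never exposed to the browser

const wss = new WebSocketServer({ port: 8080, path: "/ws" });

wss.on("connection", (client) => {
  // Browsers cannot set auth headers on WebSockets; the server can.
  // API key mode shown here; Entra ID mode would send "Authorization: Bearer <token>" instead.
  const upstream = new WebSocket(UPSTREAM_URL, {
    headers: { "api-key": API_KEY },
  });

  // Relay messages transparently in both directions
  // (a real proxy would also buffer messages sent before the upstream is open).
  client.on("message", (data) => {
    if (upstream.readyState === WebSocket.OPEN) upstream.send(data);
  });
  upstream.on("message", (data) => {
    if (client.readyState === WebSocket.OPEN) client.send(data);
  });

  // Tear down the pair together.
  client.on("close", () => upstream.close());
  upstream.on("close", () => client.close());
});
```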
npm:

```bash
npm install @iloveagents/foundry-voice-live-proxy-node
```

Docker:

```bash
docker pull ghcr.io/iloveagents/foundry-voice-live-proxy:latest
```
1. Configure environment
```bash
# Create .env file
cat > .env << 'EOF'
FOUNDRY_RESOURCE_NAME=your-resource-name
FOUNDRY_API_KEY=your-api-key
EOF
```
2. Run the proxy
```bash
# With Docker (recommended)
docker run -p 8080:8080 --env-file .env ghcr.io/iloveagents/foundry-voice-live-proxy:latest

# Or with npm
npx @iloveagents/foundry-voice-live-proxy-node
```
3. Verify it's running
```bash
curl http://localhost:8080/health
```
4. Connect from your app
```typescript
const ws = new WebSocket("ws://localhost:8080/ws");
```
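The proxy forwards Voice Live API messages to your client unchanged. A slightly fuller handling sketch using only the standard WebSocket events (the `type` field read below is an assumption about the upstream event shape):

```typescript
const ws = new WebSocket("ws://localhost:8080/ws");

ws.addEventListener("open", () => {
  console.log("Connected to the proxy");
});

ws.addEventListener("message", (event) => {
  // Text frames are forwarded verbatim from the Voice Live API.
  if (typeof event.data === "string") {
    const serverEvent = JSON.parse(event.data);
    console.log("Server event:", serverEvent.type ?? serverEvent);
  }
});

ws.addEventListener("error", () => console.error("WebSocket error"));
ws.addEventListener("close", (event) => {
  console.log(`Connection closed: ${event.code} ${event.reason}`);
});
```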
> Region Availability: The default model (gpt-realtime) is only available in East US 2 and Sweden Central regions. Make sure your Azure AI Foundry resource is deployed in one of these regions. See Microsoft docs for current availability.
Copy .env.example to .env and configure:
```bash
# Required
FOUNDRY_RESOURCE_NAME=your-resource-name
```
## Authentication Modes

### API Key (Standard)
Best for: demos, internal tools, trusted environments.
```typescript
// Frontend - no token needed
const ws = new WebSocket("ws://localhost:8080/ws");
```

```bash
# Backend .env
FOUNDRY_RESOURCE_NAME=your-resource
FOUNDRY_API_KEY=your-api-key # Secured server-side
```

### Entra ID Token (MSAL)
Best for: enterprise apps, per-user auditing, SSO.
```typescript
// Frontend - acquire and pass token
const token = await msalInstance.acquireTokenSilent({
  scopes: ["https://ai.azure.com/.default"],
});
const ws = new WebSocket(`ws://localhost:8080/ws?token=${token.accessToken}`);
```

```bash
# Backend .env
FOUNDRY_RESOURCE_NAME=your-resource
# No API key - uses client's MSAL token
```

Setup:
1. Create an Azure App Registration with the https://ai.azure.com/.default scope
2. Assign the "Cognitive Services User" role on your AI Foundry resource
3. Configure MSAL in your frontend app (see the sketch below)
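A minimal MSAL setup sketch, assuming `@azure/msal-browser` and placeholder values for the client and tenant IDs (replace them with your App Registration's values):

```typescript
import { PublicClientApplication } from "@azure/msal-browser";

// Placeholder values - use your App Registration's client ID and tenant ID.
const msalInstance = new PublicClientApplication({
  auth: {
    clientId: "<your-client-id>",
    authority: "https://login.microsoftonline.com/<your-tenant-id>",
    redirectUri: window.location.origin,
  },
});

await msalInstance.initialize();

// Sign the user in once, then acquire tokens silently afterwards.
const login = await msalInstance.loginPopup({
  scopes: ["https://ai.azure.com/.default"],
});
const token = await msalInstance.acquireTokenSilent({
  scopes: ["https://ai.azure.com/.default"],
  account: login.account,
});
```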
### Agent Mode

Best for: custom agents built in Azure AI Foundry.
```typescript
// Frontend - pass agentId, projectName, and token
const token = await msalInstance.acquireTokenSilent({
  scopes: ["https://ai.azure.com/.default"],
});
const ws = new WebSocket(
  `ws://localhost:8080/ws?agentId=asst_abc123&projectName=my-project&token=${token.accessToken}`
);
```

```bash
# Backend .env
FOUNDRY_RESOURCE_NAME=your-resource
# agentId and projectName come from client URL
```

Mode detection is automatic: Agent mode activates when both agentId and projectName are present, as sketched below.
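Conceptually, the detection looks something like this (an illustrative sketch, not the proxy's actual source):

```typescript
// Illustrative sketch of how a mode could be picked from the query string.
import { URL } from "node:url";

type ProxyMode = "agent" | "standard";

function detectMode(requestUrl: string): ProxyMode {
  const params = new URL(requestUrl, "http://localhost").searchParams;
  const agentId = params.get("agentId");
  const projectName = params.get("projectName");

  // Agent mode only when BOTH identifiers are supplied by the client.
  return agentId && projectName ? "agent" : "standard";
}

// detectMode("/ws?agentId=asst_abc123&projectName=my-project&token=...") === "agent"
// detectMode("/ws?token=...") === "standard"
```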
## Deployment

### Docker Compose
```bash
cp .env.example .env
# Edit .env with your values
docker-compose up -d
```

### Docker (build from source)
```bash
docker build -t foundry-voice-live-proxy .
docker run -p 8080:8080 --env-file .env foundry-voice-live-proxy
```

### Docker (prebuilt image from GHCR)
```bash
docker pull ghcr.io/iloveagents/foundry-voice-live-proxy:latest
docker run -p 8080:8080 \
  -e FOUNDRY_RESOURCE_NAME=your-resource \
  -e FOUNDRY_API_KEY=your-key \
  ghcr.io/iloveagents/foundry-voice-live-proxy:latest
```

### PM2
```bash
npm install -g pm2
pm2 start node_modules/@iloveagents/foundry-voice-live-proxy-node/dist/index.js --name voice-proxy
pm2 save && pm2 startup
```

### Azure Container Apps
```bash
az containerapp create \
  --name voice-proxy \
  --resource-group your-rg \
  --environment your-env \
  --image ghcr.io/iloveagents/foundry-voice-live-proxy:latest \
  --target-port 8080 \
  --ingress external \
  --env-vars FOUNDRY_RESOURCE_NAME=your-resource FOUNDRY_API_KEY=your-key
```

## API Reference
### Endpoints
| Endpoint | Method | Description                |
| -------- | ------ | -------------------------- |
| /        | GET    | API info and version       |
| /health  | GET    | Health check (for probes)  |
| /ws      | WS     | WebSocket proxy connection |

### WebSocket Query Parameters
| Parameter   | Required    | Description                   | Example      |
| ----------- | ----------- | ----------------------------- | ------------ |
| token       | Conditional | MSAL access token             | eyJ0eXAi...  |
| agentId     | Conditional | Agent ID (enables Agent mode) | asst_123xyz  |
| projectName | Conditional | Project name (with agentId)   | my-project   |
| model       | No          | Model override                | gpt-realtime |
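For convenience, a small helper that assembles the proxy URL from these parameters (the parameter names match the table above; the helper itself is illustrative and not part of the package):

```typescript
// Hypothetical helper - builds a /ws URL from the query parameters above.
interface ProxyConnectionOptions {
  baseUrl: string;     // e.g. "ws://localhost:8080"
  token?: string;      // MSAL access token (Entra ID / Agent mode)
  agentId?: string;    // enables Agent mode together with projectName
  projectName?: string;
  model?: string;      // optional model override
}

function buildProxyUrl(opts: ProxyConnectionOptions): string {
  const url = new URL("/ws", opts.baseUrl);
  for (const [key, value] of Object.entries(opts)) {
    if (key !== "baseUrl" && value) url.searchParams.set(key, value);
  }
  return url.toString();
}

// buildProxyUrl({ baseUrl: "ws://localhost:8080", agentId: "asst_abc123", projectName: "my-project", token })
```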
### Environment Variables

| Variable                | Required    | Default               | Description                 |
| ----------------------- | ----------- | --------------------- | --------------------------- |
| FOUNDRY_RESOURCE_NAME   | Yes         | -                     | Azure AI Foundry resource   |
| FOUNDRY_API_KEY         | Conditional | -                     | API key (if not using MSAL) |
| PORT                    | No          | 8080                  | Server port                 |
| API_VERSION             | No          | 2025-10-01            | Azure API version           |
| ALLOWED_ORIGINS         | No          | http://localhost:3000 | CORS origins (comma-sep)    |
| RATE_LIMIT_MAX_REQUESTS | No          | 100                   | Max requests per window     |
| RATE_LIMIT_WINDOW_MS    | No          | 60000                 | Rate limit window (ms)      |
| MAX_CONNECTIONS         | No          | 1000                  | Max concurrent connections  |
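If you need to consume the same variables in your own tooling, a typed sketch of how they might be read and defaulted (variable names and defaults come from the table above; the loader itself is illustrative, not the package's):

```typescript
// Illustrative config loader using the variables documented above.
interface ProxyConfig {
  resourceName: string;
  apiKey?: string;
  port: number;
  apiVersion: string;
  allowedOrigins: string[];
  rateLimitMaxRequests: number;
  rateLimitWindowMs: number;
  maxConnections: number;
}

function loadConfig(env: NodeJS.ProcessEnv = process.env): ProxyConfig {
  const resourceName = env.FOUNDRY_RESOURCE_NAME;
  if (!resourceName) throw new Error("FOUNDRY_RESOURCE_NAME is required");

  return {
    resourceName,
    apiKey: env.FOUNDRY_API_KEY, // optional when clients bring MSAL tokens
    port: Number(env.PORT ?? 8080),
    apiVersion: env.API_VERSION ?? "2025-10-01",
    allowedOrigins: (env.ALLOWED_ORIGINS ?? "http://localhost:3000").split(","),
    rateLimitMaxRequests: Number(env.RATE_LIMIT_MAX_REQUESTS ?? 100),
    rateLimitWindowMs: Number(env.RATE_LIMIT_WINDOW_MS ?? 60000),
    maxConnections: Number(env.MAX_CONNECTIONS ?? 1000),
  };
}
```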
### Health Check Response

```json
{
  "status": "ok",
  "activeConnections": 5,
  "maxConnections": 1000,
  "timestamp": "2025-01-15T10:30:00.000Z"
}
```
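A small readiness-check sketch against `/health`, typed to the response shape shown above (the field names come from the example; the check itself is illustrative):

```typescript
// Illustrative readiness probe for the /health endpoint.
interface HealthResponse {
  status: string;
  activeConnections: number;
  maxConnections: number;
  timestamp: string;
}

async function waitForProxy(baseUrl = "http://localhost:8080"): Promise<HealthResponse> {
  const res = await fetch(`${baseUrl}/health`);
  if (!res.ok) throw new Error(`Health check failed: HTTP ${res.status}`);

  const health = (await res.json()) as HealthResponse;
  if (health.status !== "ok") throw new Error(`Proxy not ready: ${health.status}`);
  return health;
}

// const health = await waitForProxy();
// console.log(`${health.activeConnections}/${health.maxConnections} connections in use`);
```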
## Troubleshooting
| Error               | Solution                                                          |
| ------------------- | ----------------------------------------------------------------- |
| Connection fails    | Check .env values, verify with curl http://localhost:8080/health  |
| "Blocked by CORS"   | Add your origin to ALLOWED_ORIGINS                                 |
| "Too many requests" | Rate limit hit - wait or increase RATE_LIMIT_MAX_REQUESTS          |
| "Missing token"     | Agent mode requires MSAL token in URL                              |
| "API key required"  | Standard mode needs FOUNDRY_API_KEY or client MSAL token           |

If this library made your life easier, a coffee is a simple way to say thanks ☕
It directly supports maintenance and future features.

MIT - Made with 💜 by iLoveAgents