Drop‑in real‑time voice agent pipeline (VAD → STT → LLM → TTS) for Next.js + LiveKit
A small library providing a fully wired voice agent pipeline for Next.js applications. It glues together LiveKit, voice activity detection (VAD), speech‑to‑text (STT), large language model (LLM) chat, and text‑to‑speech (TTS).
The package exports separate client and server utilities so that browser code and API routes can remain tree‑shakeable.
- LivekitAdapter – connects to a LiveKit room, publishes microphone audio and exposes raw 48 kHz PCM frames.
- VADProcessor – wraps @ricky0123/vad-web for browser‑side voice activity detection. Emits 16 kHz audio buffers when speech ends.
- STTClient – server‑side class using Google Cloud Speech to transcribe LINEAR16 audio or Float32Array data.
- LLMClient – convenience wrapper around Google Gemini / Vertex AI for chat style interactions. Maintains in‑memory history.
- TTSClient – server‑side text‑to‑speech via Google Cloud. Returns a buffer containing encoded audio (MP3 by default).
- createLivekitTokenRoute – helper to create a GET handler for Next.js API routes that issues LiveKit access tokens.
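The client and server utilities above are exposed through separate entry points; the examples later in this README import from them like so:

```ts
// Browser code
import { LivekitAdapter, VADProcessor } from '@zerolumenlabs/voice-agent/client';

// API routes / server code
import { createLivekitTokenRoute, STTClient, LLMClient, TTSClient } from '@zerolumenlabs/voice-agent/server';
```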
The package lives inside this repository. From the repo root run:
```bash
npm install @zerolumenlabs/voice-agent
```

It will then be available as @zerolumenlabs/voice-agent in your workspace.
Certain environment variables are required depending on which parts you use:
| Variable | Purpose |
| -------- | ------- |
| LIVEKIT_API_KEY / LIVEKIT_API_SECRET | Credentials for creating LiveKit tokens |
| NEXT_PUBLIC_LIVEKIT_URL | Public WebSocket URL for your LiveKit deployment |
| GOOGLE_API_KEY | API key for Google Generative AI (Gemini) |
| GOOGLE_GENAI_USE_VERTEXAI | Set to true to use Vertex AI instead of the public API |
| GOOGLE_CLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS | Required when using Vertex AI and the Google Cloud SDK clients |
| GOOGLE_CLOUD_LOCATION | Vertex AI region (defaults to us-central1) |
Google STT and TTS also rely on standard Google Cloud credentials (for example via GOOGLE_APPLICATION_CREDENTIALS).
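For local development these variables can go in a .env.local file at the project root; the values below are placeholders:

```bash
# .env.local — placeholder values
LIVEKIT_API_KEY=your_livekit_api_key
LIVEKIT_API_SECRET=your_livekit_api_secret
NEXT_PUBLIC_LIVEKIT_URL=wss://your-deployment.livekit.cloud
GOOGLE_API_KEY=your_gemini_api_key

# Only when using Vertex AI and the Google Cloud SDK clients
GOOGLE_GENAI_USE_VERTEXAI=true
GOOGLE_CLOUD_PROJECT=your-gcp-project
GOOGLE_CLOUD_LOCATION=us-central1
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
```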
Create an API route for issuing LiveKit tokens:
```ts
// src/app/api/voice/token/route.ts
import { createLivekitTokenRoute } from '@zerolumenlabs/voice-agent/server';

export const GET = createLivekitTokenRoute({
  apiKey: process.env.LIVEKIT_API_KEY!,
  apiSecret: process.env.LIVEKIT_API_SECRET!,
  livekitUrl: process.env.NEXT_PUBLIC_LIVEKIT_URL!,
});
```
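On the client, a token is fetched from this route before connecting. The query parameters and response field below are assumptions (they depend on how createLivekitTokenRoute parses the request and what it returns), so treat this as a sketch:

```ts
// Sketch only: adjust the query parameters and response fields to match
// what createLivekitTokenRoute actually accepts and returns.
const res = await fetch(`/api/voice/token?room=demo&identity=${encodeURIComponent(identity)}`);
const { token } = await res.json();
// `token` is then passed to LivekitAdapter.connect (see the client example below).
```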
Handle the voice interaction in another route:
```ts
import { NextResponse } from 'next/server';
import { LLMClient, STTClient, TTSClient } from '@zerolumenlabs/voice-agent/server';

export async function POST(req: Request) {
  const { identity, pcmBase64 } = await req.json();
  const pcm = Buffer.from(pcmBase64, 'base64');

  // 1. speech‑to‑text
  const stt = new STTClient();
  const userText = await stt.transcribe(pcm);

  // 2. language model
  const llm = new LLMClient({ systemPrompt: 'You are an assistant.' });
  const reply = await llm.chat(userText);

  // 3. text‑to‑speech
  const tts = new TTSClient();
  const audio = await tts.speak(reply);

  return NextResponse.json({
    text: reply,
    audioBase64: audio.toString('base64'),
  });
}
```
Connect to LiveKit and start the VAD processor:
```ts
import { LivekitAdapter, VADProcessor } from '@zerolumenlabs/voice-agent/client';

const adapter = new LivekitAdapter();
await adapter.connect(livekitUrl, token);

const vad = await VADProcessor.create(async (audio16k) => {
  // Convert Float32 samples ([-1, 1]) to 16-bit signed PCM
  const pcm = new Int16Array(audio16k.length);
  for (let i = 0; i < audio16k.length; i++) {
    const s = Math.max(-1, Math.min(1, audio16k[i]));
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  const base64 = Buffer.from(pcm.buffer).toString('base64');

  await fetch('/api/voice', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ identity, pcmBase64: base64 }),
  });
});

vad.start();
```
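The callback above sends audio to the /api/voice route but ignores the response. One way to play the returned audio in the browser (assuming the default MP3 encoding from TTSClient) is a data URL:

```ts
// Hypothetical helper: decode and play the base64 MP3 returned by /api/voice.
async function playReply(audioBase64: string): Promise<void> {
  const audio = new Audio(`data:audio/mp3;base64,${audioBase64}`);
  await audio.play();
}
```

Inside the VAD callback you would read the JSON body of the fetch response and pass its audioBase64 field to a helper like this.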
VADProcessor.create accepts optional callbacks such as onSpeechStart and returns a processor that can be started, paused or destroyed.
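The exact create signature is not shown here, so the options object and the pause/destroy method names below are assumptions based on the description above; a minimal sketch:

```ts
// Sketch only: assumes an options object as the second argument and
// pause()/destroy() methods on the returned processor.
const processor = await VADProcessor.create(
  async (audio16k) => { /* handle end of speech as in the example above */ },
  { onSpeechStart: () => console.log('speech started') },
);

processor.start();   // begin listening
processor.pause();   // temporarily stop detection
processor.destroy(); // tear down when the session ends
```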
Run npm run build from the package directory to compile both the client and server bundles under dist/.