Automatic JSONL logging for LLM APIs. Drop-in wrappers for OpenAI SDK, OpenRouter SDK, or raw fetch calls.
```bash
npm install vottur
```

Wrap your OpenAI or OpenRouter client, and every request gets logged to a file. No code changes needed.
Works with OpenAI, OpenRouter, Azure, Ollama, or anything OpenAI-compatible. "Vottur" means "witness" in Icelandic.
Vottur wraps your SDK client and intercepts every API call. When you call chat.completions.create(), it:
1. Captures the request (model, messages, parameters)
2. Passes it through to the real SDK
3. Captures the response (content, tokens, latency)
4. Logs everything to a JSONL file (fire-and-forget, never awaited)
Your code stays exactly the same. The response object is unchanged. Vottur just watches and logs.
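To make the mechanism concrete, here is a conceptual sketch of that kind of wrapper. This is not Vottur's actual source; `withLogging` and the default log path are made up for illustration, and streaming is ignored.

```typescript
// Conceptual sketch only (not vottur's implementation): intercept
// chat.completions.create() on an OpenAI client, time the call, and
// append a log line without awaiting the write.
import OpenAI from 'openai';
import { appendFile } from 'node:fs/promises';

function withLogging(client: OpenAI, logPath = '.vottur/logs.jsonl'): OpenAI {
  const realCreate = client.chat.completions.create.bind(client.chat.completions);

  (client.chat.completions as any).create = async (body: any, options?: any) => {
    const started = Date.now();
    const response = await realCreate(body, options); // pass through unchanged

    // Fire-and-forget: the write is never awaited, so the caller is not delayed.
    void appendFile(
      logPath,
      JSON.stringify({
        timestamp: new Date().toISOString(),
        latency_ms: Date.now() - started,
        model: body.model,
        input: { messages: body.messages },
        output: { content: (response as any).choices?.[0]?.message?.content ?? null },
      }) + '\n',
    ).catch(() => {});

    return response;
  };

  return client;
}
```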
```bash
npm install vottur openai
```
```typescript
import { createClient } from 'vottur';

const client = createClient({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```
Every request is logged to .vottur/logs.jsonl.
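Since each line is a standalone JSON object (see the log format documented below), the file can be read back with plain Node. A minimal sketch:

```typescript
// Sketch: load .vottur/logs.jsonl and print a one-line summary per entry.
// Field names follow the log format documented later in this README.
import { readFileSync } from 'node:fs';

const entries = readFileSync('.vottur/logs.jsonl', 'utf8')
  .split('\n')
  .filter(Boolean)
  .map((line) => JSON.parse(line));

for (const entry of entries) {
  console.log(entry.timestamp, entry.model, entry.usage?.total_tokens ?? '-');
}
```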
Drop-in replacement for the OpenAI SDK:
```typescript
import { createClient } from 'vottur';

const client = createClient({
  apiKey: process.env.OPENAI_API_KEY,
});

// Same API as OpenAI SDK
const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```
Works with any OpenAI-compatible endpoint:
```typescript
// OpenRouter via OpenAI SDK
const openrouterClient = createClient({
  apiKey: process.env.OPENROUTER_API_KEY,
  baseURL: 'https://openrouter.ai/api/v1',
});

// Azure OpenAI
const azureClient = createClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  baseURL: 'https://your-resource.openai.azure.com/openai/deployments/gpt-5.2',
});

// Local (Ollama, LM Studio)
const localClient = createClient({
  apiKey: 'ollama',
  baseURL: 'http://localhost:11434/v1',
});
```
For the @openrouter/sdk with native features:
```bash
npm install vottur @openrouter/sdk
```
```typescript
import { createClient } from 'vottur/openrouter';

const client = createClient({
  apiKey: process.env.OPENROUTER_API_KEY,
  siteUrl: 'https://myapp.com',
  siteName: 'My App',
});

const response = await client.chat.send({
  model: 'openai/gpt-5.2',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```
Direct fetch wrapper for any LLM API:
```typescript
import { createVottur } from 'vottur/fetch';

const vottur = createVottur({
  logPath: '.vottur/logs.jsonl',
  sessionId: 'my-session',
});

const response = await vottur('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
});
```
Or use the default instance:
```typescript
import { vottur } from 'vottur/fetch';

// Uses default config
const response = await vottur('https://api.openai.com/v1/chat/completions', init);
```
Fetch mode supports streaming:
```typescript
const response = await vottur('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: 'Tell me a story' }],
    stream: true,
  }),
});

// Stream is automatically logged when consumed
const reader = response.body?.getReader();
// ... read chunks
```
Fetch mode also exposes trace_id for agent hierarchy tracking:
```typescript
const response = await vottur('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: { ... },
  body: JSON.stringify({
    model: 'gpt-5.2',
    messages: [...],
    _name: 'orchestrator',
  }),
});

// Get trace_id to pass to child agents
const traceId = (response as any).trace_id;

// Child request links back to parent
await vottur('...', {
  body: JSON.stringify({
    ...
    _spawnedBy: traceId,
  }),
});
```
Streaming works identically to the underlying SDK:
```typescript
const stream = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
Logs are written after the stream completes, with accumulated content and token usage.
Tool definitions and tool calls are captured in the log entry as well:
```typescript
const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Weather in Tokyo?' }],
  tools: [{
    type: 'function',
    function: {
      name: 'get_weather',
      parameters: {
        type: 'object',
        properties: { location: { type: 'string' } },
        required: ['location'],
      },
    },
  }],
});
```
createClient() accepts the OpenAI SDK options plus Vottur-specific options:
```typescript
createClient({
  // OpenAI SDK options
  apiKey: string,
  baseURL?: string,

  // Vottur options
  logPath?: string,         // Default: .vottur/logs.jsonl
  sessionId?: string,       // Group related requests
  disabled?: boolean,       // Disable logging entirely
  onLog?: (entry) => void,  // Callback for each log entry
  logRawData?: boolean,     // Include raw request/response
  logRawChunks?: boolean,   // Include raw streaming chunks
});
```
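For example, onLog can mirror each entry to another sink in-process. A small sketch; the fields read from `entry` are assumed from the log format documented below:

```typescript
// Sketch: forward a one-line summary of every logged request to the console.
const client = createClient({
  apiKey: process.env.OPENAI_API_KEY,
  logPath: 'logs/llm.jsonl', // override the default .vottur/logs.jsonl
  sessionId: 'nightly-eval', // group this run under one session
  onLog: (entry) => {
    console.log(`[vottur] ${entry.model} ${entry.latency_ms}ms ${entry.usage?.total_tokens ?? '?'} tokens`);
  },
});
```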
A unique session ID (sess_) is generated once when you call createClient(). All requests made with that client share the same session - perfect for grouping an entire agentic workflow.
```typescript
// Each createClient() call = new session
const client = createClient({ apiKey: '...' });
// All requests below share the same session
await client.chat.completions.create({ ... }); // sess_abc...
await client.chat.completions.create({ ... }); // sess_abc... (same)
await client.chat.completions.create({ ... }); // sess_abc... (same)
```
For multi-agent systems, pass the same client to all agents in a single run. They'll all share the same session, making it easy to trace the entire workflow:
```typescript
async function runWorkflow(task: string) {
  const client = createClient({ apiKey: '...' }); // One session for the whole run

  // All agents share the same client = same session
  const planner = new PlannerAgent(client);
  const executor = new ExecutorAgent(client);
  const reviewer = new ReviewerAgent(client);

  const plan = await planner.plan(task);   // sess_abc...
  const result = await executor.run(plan); // sess_abc... (same session)
  await reviewer.review(result);           // sess_abc... (same session)
}

await runWorkflow("task 1"); // sess_abc... (all agents)
await runWorkflow("task 2"); // sess_def... (new run = new session)
```
To manually control sessions:
```typescript
client._vottur.newSession(); // Start a new session mid-run
client._vottur.setSessionId('custom-id'); // Use your own ID
client._vottur.getSessionId(); // Get current session ID
```
The client._vottur helper exposes:
```typescript
client._vottur.getSessionId(); // Current session ID
client._vottur.newSession(); // Start new session, returns new ID
client._vottur.setSessionId(id); // Set custom session ID
client._vottur.flush(); // Flush pending writes to disk
client._vottur.close(); // Close and flush
client._vottur.getLogPath(); // Get log file path
client._vottur.getLastLogEntry(); // Get most recent log entry
```
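If your script calls process.exit() or you want to be certain everything is on disk before it ends, flush pending writes explicitly. A sketch, assuming flush() resolves once writes complete; `runTask` stands in for your own code:

```typescript
// Sketch: make sure the last entries are written before the script exits.
try {
  await runTask(client); // your own code using the wrapped client
} finally {
  await client._vottur.flush();
}
```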
Use _name to label requests for easier log analysis. This helps identify which part of your system made each call:
```typescript
// Label requests to identify their purpose
await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [...],
  _name: 'planning-step', // Shows up in logs as "name": "planning-step"
});

await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [...],
  _name: 'code-review',
});

await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [...],
  _name: 'final-summary',
});
```
The _name field is stripped before sending to the API - it's only used for logging.
For multi-agent systems, track parent-child relationships with _spawnedBy. Vottur exposes trace_id on every response, available immediately when the response/stream is returned (no need to wait for completion):
```typescript
// Root orchestrator - no parent
const orchestratorResponse = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Plan the task' }],
  _name: 'orchestrator',
});

// Vottur exposes trace_id on the response!
const orchestratorTraceId = (orchestratorResponse as any).trace_id;

// Child agent - spawned by orchestrator
const workerResponse = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Execute subtask' }],
  _name: 'worker',
  _spawnedBy: orchestratorTraceId, // Links to parent
});

// Get worker's trace_id for further children
const workerTraceId = (workerResponse as any).trace_id;

// Grandchild agent - spawned by worker
await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Write code' }],
  _name: 'code-writer',
  _spawnedBy: workerTraceId, // Links to worker
});
```
This creates a hierarchy in your logs, where each entry's spawned_by references its parent's trace_id:
```json
{"trace_id": "tr_abc...", "name": "orchestrator", "spawned_by": null}
{"trace_id": "tr_def...", "name": "worker", "spawned_by": "tr_abc..."}
{"trace_id": "tr_ghi...", "name": "code-writer", "spawned_by": "tr_def..."}
```
Works with any depth of nesting - each agent just passes its response.trace_id to children. Combined with session_id, you get a complete picture: session_id groups the entire workflow, and spawned_by shows the call hierarchy within it.
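As a sketch (not a built-in vottur utility), the agent tree can be rebuilt from the log file by indexing entries on spawned_by:

```typescript
// Sketch: reconstruct the agent hierarchy from the JSONL log.
import { readFileSync } from 'node:fs';

type Entry = { trace_id: string; name?: string; spawned_by?: string | null };

const entries: Entry[] = readFileSync('.vottur/logs.jsonl', 'utf8')
  .split('\n')
  .filter(Boolean)
  .map((line) => JSON.parse(line));

// Index children by their parent's trace_id.
const children = new Map<string, Entry[]>();
for (const e of entries) {
  if (e.spawned_by) {
    children.set(e.spawned_by, [...(children.get(e.spawned_by) ?? []), e]);
  }
}

// Print each root (no spawned_by) and its descendants with indentation.
function printTree(entry: Entry, depth = 0): void {
  console.log('  '.repeat(depth) + (entry.name ?? entry.trace_id));
  for (const child of children.get(entry.trace_id) ?? []) printTree(child, depth + 1);
}

for (const root of entries.filter((e) => !e.spawned_by)) printTree(root);
```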
Vottur automatically detects large content and adds warnings to log entries:
- Message > 50KB: "Message 0 content is large (52KB)"
- Total input > 100KB: "Total input is large (128KB)"
- Output > 50KB: "Output content is large (64KB)"
Warnings appear in the warnings array in log entries. This helps identify requests that might be consuming excessive tokens or hitting context limits.
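A sketch of surfacing these warnings right after a call, using getLastLogEntry() from the _vottur API above:

```typescript
// Sketch: after a request, check the most recent log entry for warnings.
const last = client._vottur.getLastLogEntry();
if (last?.warnings?.length) {
  console.warn('vottur warnings:', last.warnings);
}
```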
Each line in the JSONL file:
```json
{
  "trace_id": "tr_a1b2c3d4-...",
  "session_id": "sess_e5f6g7h8-...",
  "timestamp": "2025-12-16T23:15:17.858Z",
  "latency_ms": 677,
  "model": "gpt-5.2",
  "name": "planning-step",
  "spawned_by": "tr_parent-...",
  "input": {
    "messages": [{ "role": "user", "content": "Hello!" }],
    "tools": [...],
    "temperature": 0.7
  },
  "output": {
    "content": "Hi there!",
    "tool_calls": [...],
    "finish_reason": "stop"
  },
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 8,
    "total_tokens": 22,
    "cost": 0.0001
  },
  "streaming": false,
  "warnings": ["Message 0 content is large (52KB)"]
}
```
Note: name, spawned_by, and warnings are optional fields that appear only when set.
Provider-specific fields (like reasoning_details, cost) are preserved.
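For tooling, the shape above can be captured in a type. This is a hypothetical definition derived from the example (vottur may not export one); optional fields follow the note above.

```typescript
// Hypothetical type for one JSONL log line, based on the example above.
interface VotturLogEntry {
  trace_id: string;
  session_id: string;
  timestamp: string;          // ISO 8601
  latency_ms: number;
  model: string;
  name?: string;              // present when _name was set
  spawned_by?: string | null; // present when _spawnedBy was set
  input: { messages: unknown[]; tools?: unknown[]; temperature?: number };
  output: { content: string | null; tool_calls?: unknown[]; finish_reason: string };
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
    cost?: number;            // provider-specific, preserved when available
  };
  streaming: boolean;
  warnings?: string[];        // present when large-content warnings fired
}
```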
Vottur is a transparent proxy. It wraps SDK/fetch calls without modifying anything:
```
Your Code                    Vottur                            API
    │                          │                                │
    │  request                 │                                │
    │─────────────────────────►│                                │
    │                          │  request (unchanged)           │
    │                          │───────────────────────────────►│
    │                          │                                │
    │                          │◄───────────────────────────────│
    │                          │  response                      │
    │                          │                                │
    │                          │  [logs to JSONL in background] │
    │                          │                                │
    │◄─────────────────────────│                                │
    │  response (unchanged)    │                                │
```
- Requests pass through unchanged to the underlying SDK/fetch
- Responses return unchanged to your code
- Logging is fire-and-forget (never awaited, zero latency impact)
- All provider-specific fields are preserved
- Streaming chunks flow through unchanged, captured for logging
- Errors are logged and re-thrown unchanged
CLI commands:
```bash
npx vottur init     # Set up .vottur/ in your project
npx vottur analyze  # Show logs location
```