Lightweight SDK to track AI usage and cost in your SaaS product.
Install: `npm install @tokentracker/ai-token-tracker`

Track your AI usage with two lines of code: one at the top, one at the bottom.
OpenAI example:
```ts
import OpenAI from "openai";
import { trackOpenAIChat } from "@tokentracker/ai-token-tracker/client";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Wrap the OpenAI client with TokenTrackr’s middleware adapter
const client = trackOpenAIChat(openai);

// Send a simple prompt — TokenTrackr automatically tracks it
const response = await client.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});

console.log(response.choices[0].message.content);
```
Available adapters (same pattern):
- OpenAI: trackOpenAIChat(openai)
- Anthropic: trackAnthropicMessages(anthropic)
- Google Generative AI: trackGoogleGenerativeModel(model, { modelName })
- Mistral: trackMistralChat(mistral)
- Groq: trackGroqChat(groq)
- Azure OpenAI: trackAzureOpenAIChat(azureOpenAI)
- Cohere: trackCohereChat(cohere)
- AWS Bedrock: trackBedrockConverse(bedrock)

Images and audio adapters:
- OpenAI Images: trackOpenAIImages(openai).generate({ model, prompt })
- OpenAI Audio (TTS): trackOpenAIAudioSpeech(openai).create({ model, input })
- OpenAI Audio (Transcriptions): trackOpenAIAudioTranscriptions(openai).create({ model, file })
- OpenAI Audio (Translations): trackOpenAIAudioTranslations(openai).create({ model, file })
- Azure OpenAI Images: trackAzureOpenAIImages(azureOpenAI).generate({ model, prompt })
- Azure OpenAI Audio (TTS): trackAzureOpenAIAudioSpeech(azureOpenAI).create({ model, input })
- Azure OpenAI Audio (Transcriptions): trackAzureOpenAIAudioTranscriptions(azureOpenAI).create({ model, file })
- Azure OpenAI Audio (Translations): trackAzureOpenAIAudioTranslations(azureOpenAI).create({ model, file })
- Google Images: trackGoogleImages(model, { modelName }).generate({ contents })
- AWS Bedrock Images: trackBedrockImages(bedrock).generate({ modelId, messages })
- AWS Bedrock Audio (Transcriptions): trackBedrockAudioTranscriptions(bedrock).create({ modelId, messages })
- AWS Bedrock Audio (Translations): trackBedrockAudioTranslations(bedrock).create({ modelId, messages })
- AWS Bedrock Audio (TTS): trackBedrockAudioSpeech(bedrock).create({ modelId, messages })
OpenAI:
```ts
import OpenAI from 'openai';
import { trackOpenAIChat } from '@tokentracker/ai-token-tracker/client';

const openai = new OpenAI({ apiKey: process.env.OPENAI_KEY });
const client = trackOpenAIChat(openai);

const res = await client.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a joke' }],
});
```
Anthropic:
```ts
import Anthropic from '@anthropic-ai/sdk';
import { trackAnthropicMessages } from '@tokentracker/ai-token-tracker/client';
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_KEY });
const client = trackAnthropicMessages(anthropic);
const res = await client.create({ model: 'claude-3-5-sonnet', messages: [{ role: 'user', content: 'Summarize this' }] });
```
Google Generative AI:
```ts
import { GoogleGenerativeAI } from '@google/generative-ai';
import { trackGoogleGenerativeModel } from '@tokentracker/ai-token-tracker/client';
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-pro' });
const wrapped = trackGoogleGenerativeModel(model, { modelName: 'gemini-1.5-pro' });
const res = await wrapped.generateContent({ contents: [{ role: 'user', parts: [{ text: 'Write a poem' }] }] });
```
Mistral / Groq / Azure OpenAI (OpenAI-style):
```ts
import { trackMistralChat, trackGroqChat, trackAzureOpenAIChat } from '@tokentracker/ai-token-tracker/client';
// const client = trackMistralChat(mistral) | trackGroqChat(groq) | trackAzureOpenAIChat(azureOpenAI);
```
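For example, a minimal Groq sketch. The groq-sdk constructor and the model name are assumptions; the tracked call follows the same `create({ model, messages })` pattern as the OpenAI adapter:

```ts
// Sketch only: assumes the official groq-sdk package and a GROQ_API_KEY env var.
import Groq from 'groq-sdk';
import { trackGroqChat } from '@tokentracker/ai-token-tracker/client';

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });
const client = trackGroqChat(groq);

// Same create({ model, messages }) shape as the OpenAI adapter; model name is illustrative.
const res = await client.create({
  model: 'llama-3.1-8b-instant',
  messages: [{ role: 'user', content: 'Hello' }],
});
```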
Cohere:
```ts
import { trackCohereChat } from '@tokentracker/ai-token-tracker/client';
// const client = trackCohereChat(cohere);
// await client.chat({ model: 'command-r', messages: [{ role: 'user', content: '...' }] });
```
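A fuller Cohere sketch; the cohere-ai v2 client constructor shown here is an assumption, while the chat call shape matches the comment above:

```ts
// Sketch only: assumes the cohere-ai package's CohereClientV2 constructor.
import { CohereClientV2 } from 'cohere-ai';
import { trackCohereChat } from '@tokentracker/ai-token-tracker/client';

const cohere = new CohereClientV2({ token: process.env.COHERE_API_KEY });
const client = trackCohereChat(cohere);

const res = await client.chat({
  model: 'command-r',
  messages: [{ role: 'user', content: 'Summarize this paragraph: ...' }],
});
```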
AWS Bedrock (converse):
```ts
import { trackBedrockConverse } from '@tokentracker/ai-token-tracker/client';
// const client = trackBedrockConverse(bedrock);
// await client.converse({ modelId: 'anthropic.claude-3-5-sonnet-20241022-v2:0', messages: [...] });
```
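A sketch of constructing the bedrock client that gets wrapped, assuming the wrapper accepts a BedrockRuntimeClient from @aws-sdk/client-bedrock-runtime:

```ts
// Sketch only: assumes trackBedrockConverse accepts an AWS SDK BedrockRuntimeClient.
import { BedrockRuntimeClient } from '@aws-sdk/client-bedrock-runtime';
import { trackBedrockConverse } from '@tokentracker/ai-token-tracker/client';

const bedrock = new BedrockRuntimeClient({ region: 'us-east-1' }); // use your region
const client = trackBedrockConverse(bedrock);

const res = await client.converse({
  modelId: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
  messages: [{ role: 'user', content: [{ text: 'Hello' }] }],
});
```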
Images and audio examples
OpenAI Images:
```ts
import OpenAI from 'openai';
import { trackOpenAIImages } from '@tokentracker/ai-token-tracker/client';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const images = trackOpenAIImages(openai);
const res = await images.generate({ model: 'gpt-image-1', prompt: 'A cat in space' });
```
OpenAI Audio (TTS):
```ts
import { trackOpenAIAudioSpeech } from '@tokentracker/ai-token-tracker/client';
const audio = trackOpenAIAudioSpeech(openai);
const res = await audio.create({ model: 'gpt-4o-mini-tts', input: 'Hello' });
```
OpenAI Audio (Transcriptions):
```ts
import { trackOpenAIAudioTranscriptions } from '@tokentracker/ai-token-tracker/client';
const transcriptions = trackOpenAIAudioTranscriptions(openai);
const res = await transcriptions.create({ model: 'whisper-1', file });
```
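The translations adapter listed above follows the same shape. A sketch, reusing the openai client from the Images example; the file handling via a read stream and the file name are assumptions:

```ts
// Sketch only: mirrors the transcriptions call; the audio path is illustrative.
import { createReadStream } from 'node:fs';
import { trackOpenAIAudioTranslations } from '@tokentracker/ai-token-tracker/client';

const translations = trackOpenAIAudioTranslations(openai);
const res = await translations.create({
  model: 'whisper-1',
  file: createReadStream('speech-in-french.mp3'),
});
```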
Azure OpenAI Images:
```ts
import { trackAzureOpenAIImages } from '@tokentracker/ai-token-tracker/client';
const aimg = trackAzureOpenAIImages(azureOpenAI);
const res = await aimg.generate({ model: process.env.AZURE_OPENAI_IMAGE_DEPLOYMENT!, prompt: 'A cat in space' });
```
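How the azureOpenAI client above might be constructed; a sketch assuming the AzureOpenAI helper exported by the openai package, with illustrative env var names and API version:

```ts
// Sketch only: the AzureOpenAI options and env var names here are assumptions.
import { AzureOpenAI } from 'openai';

const azureOpenAI = new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  apiVersion: '2024-06-01',
  deployment: process.env.AZURE_OPENAI_IMAGE_DEPLOYMENT,
});
```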
Google Images (via generateContent):
```ts
import { GoogleGenerativeAI } from '@google/generative-ai';
import { trackGoogleImages } from '@tokentracker/ai-token-tracker/client';

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-pro' });
const wrapped = trackGoogleImages(model, { modelName: 'gemini-1.5-pro' });
const res = await wrapped.generate({ contents: [{ role: 'user', parts: [{ text: 'Generate an image of a cat in space' }] }] });
```
AWS Bedrock Images:
```ts
import { trackBedrockImages } from '@tokentracker/ai-token-tracker/client';
const images = trackBedrockImages(bedrock);
const res = await images.generate({ modelId: process.env.BEDROCK_MODEL_ID!, messages: [{ role: 'user', content: 'Create an image of a cat in space' }] });
```
AWS Bedrock Audio (Transcriptions):
```ts
import { trackBedrockAudioTranscriptions } from '@tokentracker/ai-token-tracker/client';
const trans = trackBedrockAudioTranscriptions(bedrock);
const res = await trans.create({ modelId: process.env.BEDROCK_MODEL_ID!, messages: [{ role: 'user', content: 'Transcribe attached audio' }] });
```
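The Bedrock translations and speech adapters listed earlier follow the same pattern. A sketch for TTS; the prompt content is illustrative:

```ts
// Sketch only: same create({ modelId, messages }) shape as the transcriptions adapter above.
import { trackBedrockAudioSpeech } from '@tokentracker/ai-token-tracker/client';

const speech = trackBedrockAudioSpeech(bedrock);
const res = await speech.create({
  modelId: process.env.BEDROCK_MODEL_ID!,
  messages: [{ role: 'user', content: 'Read this sentence aloud' }],
});
```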
Setup

1) Install

```bash
npm install @tokentracker/ai-token-tracker
```

2) Configure environment (real values)
```env
AI_TRACKER_ENDPOINT=https://tokentracker-7tu3.onrender.com/track
AI_TRACKER_API_KEY=
```
If using dotenv, load early in your app entry:
```ts
import 'dotenv/config';
```

3) Add two lines around your AI call
```ts
import { beginTrack } from '@tokentracker/ai-token-tracker/client';

// TOP: REQUIRED — capture provider, model, endpoint, and your real prompt
const done = beginTrack({
  provider: 'your-provider', // REQUIRED e.g. 'openai', 'anthropic', 'google', 'mistral', ...
  model: 'your-model', // REQUIRED
  endpoint: 'chat.completions', // REQUIRED
  prompt, // REQUIRED string OR [{ role, content }]
});

// ... your existing AI API call ...
// Example (OpenAI-style):
const res = await client.chat.completions.create({ model: 'gpt-4o', messages });

// BOTTOM: REQUIRED — pass what you want tracked (placeholders from response)
await done({
  http_status: resStatus /* e.g., transport status from your SDK */, // REQUIRED
  input_tokens: res?.usage?.prompt_tokens, // REQUIRED (or provide total_tokens)
  output_tokens: res?.usage?.completion_tokens, // REQUIRED (or provide total_tokens)
  total_tokens: res?.usage?.total_tokens, // REQUIRED if you didn't set both input/output
  response: res, // REQUIRED
  // Extras you can also set:
  // retry_count, response_size_bytes, latency_first_token_ms,
  // temperature, max_tokens, error_type, error_message_snippet, etc.
});
```

That’s it. The SDK posts JSON to `AI_TRACKER_ENDPOINT` with your data. Provide as much as you have; missing fields are allowed. The SDK stamps timestamps and latency automatically and will infer success from http_status if provided.

Note: If you do not configure an endpoint, the SDK defaults to https://tokentracker-7tu3.onrender.com/track.

What to pass
- Top: provider, model, endpoint (recommended; stored as-is)
- Prompt: prompt (string or messages array) — optional
- Bottom: http_status (optional), input_tokens/output_tokens and/or total_tokens (optional), response (optional)
- Extras (all optional): retry_count, response_size_bytes, latency_first_token_ms, temperature, max_tokens, error_type, error_message_snippet, etc.

What the SDK adds automatically (no other inference):
- timestamp_start (when beginTrack runs)
- timestamp_end (when done runs)
- latency_ms (end - start)

Provider values for beginTrack
```ts
beginTrack({ provider: 'openai', model: '', endpoint: '', prompt })
beginTrack({ provider: 'anthropic', model: '', endpoint: '', prompt })
beginTrack({ provider: 'google', model: '', endpoint: '', prompt })
beginTrack({ provider: 'mistral', model: '', endpoint: '', prompt })
beginTrack({ provider: 'groq', model: '', endpoint: '', prompt })
beginTrack({ provider: 'cohere', model: '', endpoint: '', prompt })
beginTrack({ provider: 'azure-openai', model: '', endpoint: '', prompt })
beginTrack({ provider: 'aws-bedrock', model: '', endpoint: '', prompt })
```

Examples
- String prompt (OpenAI-style placeholders)
```ts
const done = beginTrack({ provider: 'openai', prompt: 'Write a haiku about the ocean' });
const res = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a haiku about the ocean' }],
});
await done({
  http_status: resStatus, // REQUIRED
  input_tokens: res?.usage?.prompt_tokens,
  output_tokens: res?.usage?.completion_tokens,
  total_tokens: res?.usage?.total_tokens, // REQUIRED if you didn't set both input/output
  response: res, // REQUIRED
});
```

- Generic fetch (any provider)
```ts
const done = beginTrack({ provider: 'any-provider', model: 'your-model', endpoint: '/v1/whatever', prompt });
const resp = await fetch('https://provider.example.com/v1/whatever', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({ model: 'your-model', prompt }),
});
const http_status = resp.status;
const json = await resp.json();
await done({
  http_status, // REQUIRED
  // usage fields: REQUIRED (set input+output, or set total_tokens)
  input_tokens: json?.usage?.input_tokens,
  output_tokens: json?.usage?.output_tokens,
  total_tokens: json?.usage?.total_tokens,
  response: json, // REQUIRED
});
```

- Messages prompt (Anthropic-style placeholders)
```ts
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Summarize the following text: ...' },
];
const done = beginTrack({ provider: 'anthropic', model: 'claude-3-5-sonnet', endpoint: 'messages.create', prompt: messages });
const res = await anthropic.messages.create({ model: 'claude-3-5-sonnet', messages });
await done({
  http_status: resStatus, // REQUIRED
  input_tokens: res?.usage?.input_tokens,
  output_tokens: res?.usage?.output_tokens,
  response: res, // REQUIRED
});
```
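- Error path (a sketch): record a failed call with the error fields listed in the next section. The catch shape and status fallback are assumptions about your SDK's error object; missing fields are allowed.

```ts
const done = beginTrack({ provider: 'openai', model: 'gpt-4o', endpoint: 'chat.completions', prompt });
try {
  const res = await client.chat.completions.create({ model: 'gpt-4o', messages });
  await done({
    http_status: 200, // or your transport status
    input_tokens: res?.usage?.prompt_tokens,
    output_tokens: res?.usage?.completion_tokens,
    response: res,
  });
} catch (e: any) {
  // Record the failure; the status and message shapes depend on your SDK and are assumptions here.
  await done({
    http_status: e?.status ?? 500,
    error_type: e?.name,
    error_message_snippet: String(e?.message ?? '').slice(0, 200),
  });
}
```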
Event fields

You must explicitly provide every field you want tracked. The event supports:
- Identifiers and timing: request_id?, timestamp_start, timestamp_end?, latency_ms?
- Provider info: provider, model?, endpoint?
- Token usage: input_tokens?, output_tokens?, total_tokens?
- HTTP/result: http_status?, success?, retry_count?
- Errors: error_type?, error_message_snippet?
- Sizes/latency: response_size_bytes?, latency_first_token_ms?
- Content: prompt? (string or messages), response? (any), temperature?, max_tokens?

Configuration
```ts
import { configureTracker } from '@tokentracker/ai-token-tracker/client';
configureTracker({ endpoint: 'https://tokentracker-7tu3.onrender.com/track', apiKey: '' });
```
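Equivalently, you can wire the env vars from step 2 through configureTracker once at app startup. A sketch; the fallback values mirror the documented defaults:

```ts
// Reads the same variables documented above; call once before any tracking.
import { configureTracker } from '@tokentracker/ai-token-tracker/client';

configureTracker({
  endpoint: process.env.AI_TRACKER_ENDPOINT ?? 'https://tokentracker-7tu3.onrender.com/track',
  apiKey: process.env.AI_TRACKER_API_KEY ?? '',
});
```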
Requirements
- Node >= 18.17 (uses global fetch)

Module usage (ESM and CommonJS)
Use the `client` entry for convenience. Both ESM and CommonJS are supported.

- ESM (Node ESM / bundlers):
```ts
import { beginTrack, configureTracker } from '@tokentracker/ai-token-tracker/client';
```

- CommonJS (require):
```js
const { beginTrack, configureTracker } = require('@tokentracker/ai-token-tracker/client');
```

You can also import from the root package if you prefer:
- ESM:
```ts
import { beginTrack, configureTracker } from '@tokentracker/ai-token-tracker';
```

- CommonJS:
```js
const { beginTrack, configureTracker } = require('@tokentracker/ai-token-tracker');
```

Privacy
If you need to redact sensitive content, scrub it before passing prompt/response into beginTrack/done.
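For example, a minimal scrubber you might run over prompts before tracking; the regex, the helper name, and the userPrompt variable are illustrative and not part of the SDK:

```ts
// Illustrative helper, not part of the SDK: mask email addresses before tracking.
const redactEmails = (text: string) =>
  text.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[redacted-email]');

// userPrompt: your raw prompt string.
const done = beginTrack({
  provider: 'openai',
  model: 'gpt-4o',
  endpoint: 'chat.completions',
  prompt: redactEmails(userPrompt),
});
```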
Notes
- We will fully rewrite and streamline this README after all features are implemented.
- Keep wrapper examples concise; expand provider-specific guidance later.
- Ensure no mock data or fake endpoints are shown; only real shapes and notes.

Server configuration (env vars)
Set these on your Render service (server is not published to npm):
- SUPABASE_URL: Your Supabase project URL
- SUPABASE_SERVICE_ROLE_KEY: Supabase service role key
- SUPABASE_API_KEYS_TABLE: Table storing API keys (default: api_keys)
- SUPABASE_API_KEYS_TOKEN_COLUMN: Column holding the key/token (default: key)
- SUPABASE_API_KEYS_USER_COLUMN: Column with the user id (default: user_id)
- SUPABASE_API_KEYS_REVOKED_COLUMN: Column indicating revocation (default: revoked, server enforces revoked=false)
- SUPABASE_AI_EVENTS_TABLE: Destination table for events (default: ai_events)
- MAX_BODY_BYTES: Max request body size in bytes (default: 2097152)
- PORT: Server port (default: 8080)
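A sketch of the corresponding environment; values shown are placeholders, and the defaults noted above apply when a variable is unset:

```env
# Placeholder values; set real ones on your Render service.
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=
SUPABASE_API_KEYS_TABLE=api_keys
SUPABASE_API_KEYS_TOKEN_COLUMN=key
SUPABASE_API_KEYS_USER_COLUMN=user_id
SUPABASE_API_KEYS_REVOKED_COLUMN=revoked
SUPABASE_AI_EVENTS_TABLE=ai_events
MAX_BODY_BYTES=2097152
PORT=8080
```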
License
- SDK (src/ and the published npm package @tokentracker/ai-token-tracker): MIT. See LICENSE.
- Server (server/): Proprietary. See server/LICENSE. The server code is not open-source and may not be copied, modified, or redistributed without a commercial license. For commercial use of the server as part of the ai-token-tracker service, contact the copyright holder for licensing terms.