# llm-token-tracker

Token usage tracker for OpenAI, Claude, and Gemini APIs with MCP (Model Context Protocol) support, updated with 2025 pricing. Pass accurate API costs on to your users.


- 🎯 Simple Integration - One line to wrap your API client
- 📊 Automatic Tracking - No manual token counting
- 💰 Accurate Pricing - Up-to-date pricing for all models (2025)
- 🔄 Multiple Providers - OpenAI, Claude, and Gemini support
- 👥 User Management - Track usage per user/session
- 💱 Currency Support - USD and KRW
- 🤖 MCP Server - Use directly in Claude Desktop!
- 📊 Intuitive Session Tracking - Real-time usage with progress bars
```bash
npm install llm-token-tracker
```
```javascript
const { TokenTracker } = require('llm-token-tracker');
// or import { TokenTracker } from 'llm-token-tracker';

// Initialize tracker
const tracker = new TokenTracker({
  currency: 'USD' // or 'KRW'
});

// Example: Manual tracking
const trackingId = tracker.startTracking('user-123');

// ... your API call here ...

tracker.endTracking(trackingId, {
  provider: 'openai', // or 'anthropic' or 'gemini'
  model: 'gpt-3.5-turbo',
  inputTokens: 100,
  outputTokens: 50,
  totalTokens: 150
});

// Get user's usage
const usage = tracker.getUserUsage('user-123');
console.log(`Total cost: $${usage.totalCost}`);
```
To use with actual OpenAI/Anthropic APIs:
```javascript
const OpenAI = require('openai');
const { TokenTracker } = require('llm-token-tracker');

const tracker = new TokenTracker();
const openai = tracker.wrap(new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
}));

// Use normally - tracking happens automatically
const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello!" }]
});

console.log(response._tokenUsage);
// { tokens: 125, cost: 0.0002, model: "gpt-3.5-turbo" }
```
Add to Claude Desktop settings (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "token-tracker": {
      "command": "npx",
      "args": ["llm-token-tracker"]
    }
  }
}
```
Then in Claude:
- "Calculate current session usage" - See current session usage with intuitive format
- "Calculate current conversation cost" - Get cost breakdown with input/output tokens
- "Track my API usage"
- "Compare costs between GPT-4 and Claude"
- "Show my total spending today"
#### Available MCP Tools

1. `get_current_session` - 📊 Get current session usage (RECOMMENDED)
   - Returns: used/remaining tokens, input/output breakdown, cost, progress bar
   - Default `user_id`: `current-session`
   - Default budget: 190,000 tokens
   - Perfect for real-time conversation tracking!
2. `track_usage` - Track token usage for an AI API call
   - Parameters: `provider`, `model`, `input_tokens`, `output_tokens`, `user_id`
3. `get_usage` - Get usage summary for a specific user or all users
4. `compare_costs` - Compare costs between different models
5. `clear_usage` - Clear usage data for a user
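As a sketch, a `track_usage` invocation from an MCP client carries arguments matching the parameter list above (the values here are illustrative, not from the package's documentation):

```json
{
  "name": "track_usage",
  "arguments": {
    "provider": "anthropic",
    "model": "claude-sonnet-4.5",
    "input_tokens": 1200,
    "output_tokens": 300,
    "user_id": "user-123"
  }
}
```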
#### Example MCP Output

```
💰 Current Session
━━━━━━━━━━━━━━━━━━━━━━
📊 Used: 62,830 tokens (33.1%)
✨ Remaining: 127,170 tokens
[███████░░░░░░░░░░░░░]
📥 Input: 55,000 tokens
📤 Output: 7,830 tokens
💵 Cost: $0.2825
━━━━━━━━━━━━━━━━━━━━━━
📊 Model Breakdown:
• anthropic/claude-sonnet-4.5: 62,830 tokens ($0.2825)
```
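The numbers in that output are easy to verify. Here is a quick sketch of the math; the per-token prices are an assumption (roughly $3 per million input tokens and $15 per million output tokens for claude-sonnet-4.5), so check the tracker's pricing table for authoritative values:

```javascript
// Reproduce the session summary above from its raw token counts
const budget = 190_000;                  // default session budget
const input = 55_000;
const output = 7_830;

const used = input + output;             // 62,830 tokens
const remaining = budget - used;         // 127,170 tokens
const pctUsed = (used / budget) * 100;   // ~33.1%

// Assumed prices: $3 / 1M input tokens, $15 / 1M output tokens
const cost = (input * 3 + output * 15) / 1_000_000;

console.log(`Used: ${used.toLocaleString()} tokens (${pctUsed.toFixed(1)}%)`);
console.log(`Remaining: ${remaining.toLocaleString()} tokens`);
console.log(`Cost: $${cost.toFixed(4)}`);
```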
Note: Prices shown are per 1,000 tokens. Batch API offers 50% discount. Prompt caching can reduce costs by up to 90%.
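As a rough illustration of those discounts (pure arithmetic on the percentages stated above; actual billing is determined by each provider):

```javascript
// Illustrative discount arithmetic, not provider billing logic
const baseCost = 0.2825;                  // e.g. the session cost shown earlier
const batchCost = baseCost * 0.5;         // Batch API: 50% discount
const cachedCost = baseCost * (1 - 0.9);  // prompt caching: up to 90% reduction
console.log(batchCost, cachedCost);
```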
Run the example:

```bash
npm run example
```

Check `examples/basic-usage.js` for detailed usage patterns.
#### API Reference

- `new TokenTracker(config)`
  - `config.currency`: `'USD'` or `'KRW'` (default: `'USD'`)
  - `config.webhookUrl`: optional webhook for usage notifications
- `tracker.wrap(client)` - Wrap an OpenAI or Anthropic client for automatic tracking.
- Create a user-specific tracker instance.
- `tracker.startTracking(userId)` - Start a manual tracking session. Returns a tracking ID.
- `tracker.endTracking(trackingId, usage)` - End tracking and record usage.
- `tracker.getUserUsage(userId)` - Get total usage for a user.
- Get usage summary for all users.

#### 🛠 Development
```bash
# Install dependencies
npm install

# Build TypeScript
npm run build

# Watch mode
npm run dev

# Run examples
npm run example
```

#### 📝 License
MIT
#### 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
#### 🐛 Issues
For bugs and feature requests, please create an issue.
#### 📦 What's New in v2.4.0

- 🌟 Gemini API Support - Full integration with Google's Gemini models
- 🆕 Gemini 2.0 Support - Free preview models included
- 💰 Enhanced Pricing - Up-to-date Gemini 1.5 and 2.0 pricing
- 🔧 Auto-detection - Automatic Gemini client wrapping
- 💱 Cost Comparison - Compare Gemini with OpenAI and Claude
#### 📦 What's New in v2.3.0

- 💱 Real-time exchange rates - Automatic USD to KRW conversion
- 🌐 Uses exchangerate-api.com for accurate rates
- 💾 24-hour caching to minimize API calls
- 🆕 New `get_exchange_rate` tool to check current rates
- 🔄 Background auto-updates with fallback support

#### 📦 What's New in v2.2.0
- 🗂️ File-based persistence - Session data survives server restarts
- 💾 Automatic saving to `~/.llm-token-tracker/sessions.json`
- 🔄 Works for both npm and local installations
- 📊 Historical data tracking across sessions
- 🎯 Zero configuration required - just works!

#### 📦 What's New in v2.1.0
- 🆕 Added `get_current_session` tool for intuitive session tracking
- 📊 Real-time progress bars and visual indicators
- 💰 Enhanced cost breakdown with input/output token separation
- 🎨 Improved formatting with thousands separators
- 🔧 Better default `user_id` handling (`current-session`)

---
Built with ❤️ for developers who need transparent AI API billing.