Estimate USD costs for Large Language Model (LLM) API calls with dynamic pricing fetching and accurate token counting.
---
```bash
npm install -g llm-cost-estimator
```

```bash
llm-cost "Your prompt here" --model gpt-4 --output 100
```

Output:

```
$0.0061
```
---
Comprehensive documentation is available in the GitHub repository:
- 🔧 Installation & Setup - Installation instructions, global setup, and configuration options
- 💻 CLI Usage - Complete guide to using the command-line interface, all options, and examples
- 📦 Programmatic API - How to use the package in your TypeScript/JavaScript projects
- 🤖 Supported Models - Complete list of all supported models across OpenAI, Anthropic, Gemini, and Grok
- 🔢 Token Counting - Understanding how tokens are counted, tiktoken vs fallback, and accuracy
- 💰 Pricing Data - How pricing is fetched, cached, and updated dynamically
- 🏗️ Project Structure - File organization, directory layout, and code architecture
- 🔌 Adding Providers - Guide to extending the package with new LLM providers
- ⚙️ Development - Building, testing, and contributing to the project
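The token-counting documentation above distinguishes tiktoken-based counting from fallback estimation for non-OpenAI models. As a rough illustration of how such a fallback can work, here is a minimal sketch assuming a common ~4 characters-per-token heuristic and per-million-token pricing. The names (`estimateTokens`, `estimateCostUSD`, `Pricing`) and the heuristic itself are illustrative assumptions, not the package's actual API or method.

```typescript
// Illustrative sketch of a fallback token estimator and cost calculation.
// Assumption: ~4 characters per token (a common heuristic, not exact).

type Pricing = {
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
};

function estimateTokens(text: string): number {
  // Round up so short prompts never count as zero tokens.
  return Math.ceil(text.length / 4);
}

function estimateCostUSD(
  prompt: string,
  outputTokens: number,
  pricing: Pricing
): number {
  const inputTokens = estimateTokens(prompt);
  // Prices are quoted per 1M tokens, so divide by 1e6.
  return (
    (inputTokens * pricing.inputPerMTok +
      outputTokens * pricing.outputPerMTok) / 1e6
  );
}
```

For example, with hypothetical pricing of $30/$60 per 1M input/output tokens, a 16-character prompt plus 100 output tokens estimates to about $0.006 — the same order of magnitude as the CLI output shown above.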
---
- ✅ Accurate Token Counting - Uses `@dqbd/tiktoken` for OpenAI models, with fallback estimation for others
- ✅ Dynamic Pricing - Fetches up-to-date pricing from provider websites at runtime
- ✅ Multiple Providers - Supports OpenAI, Anthropic (Claude), Google Gemini, and Grok (xAI)
- ✅ Smart Caching - In-memory cache with 24-hour TTL to minimize API calls
- ✅ CLI Tool - Easy-to-use command-line interface with minimal output
- ✅ TypeScript - Fully typed with strict TypeScript support
- ✅ Modular Design - Easy to extend with new providers
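The smart-caching feature above mentions an in-memory cache with a 24-hour TTL. A minimal sketch of such a cache is shown below; the `TtlCache` class and its method names are hypothetical, for illustration only, and not the package's internal implementation.

```typescript
// Illustrative in-memory cache with a time-to-live (TTL), defaulting to
// 24 hours. Expired entries are evicted lazily on read.

type Entry<T> = { value: T; expiresAt: number };

class TtlCache<T> {
  private store = new Map<string, Entry<T>>();

  constructor(private ttlMs: number = 24 * 60 * 60 * 1000) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

A caller would check the cache first and only fetch provider pricing on a miss, which is what keeps repeated cost estimates from re-hitting provider websites within the TTL window.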
---
Licensed under the MIT License.
Contributions are welcome! Please feel free to submit a Pull Request.