AI services library with multi-provider LLM support (OpenAI, Anthropic, Gemini), text-to-speech, and RAG utilities. Includes token counting, cost tracking, and streaming support.
## Installation

```bash
npm install @egintegrations/ai-services
```
Install the SDKs for the AI providers you need:

```bash
# For OpenAI
npm install openai

# For Anthropic
npm install @anthropic-ai/sdk

# For Google Gemini
npm install @google/generative-ai
```
## Features
- **Multi-Provider LLM Support**: OpenAI, Anthropic Claude, Google Gemini
- **Unified Interface**: Single API for all LLM providers
- **Streaming**: Async iterators for streaming responses
- **Token Management**: Count tokens and estimate costs
- **RAG (Retrieval Augmented Generation)**: Built-in vector search and context augmentation
- **Embeddings**: Generate embeddings for semantic search (OpenAI)
- **Factory Pattern**: Easy provider switching
- **TypeScript**: Full type safety
## Quick Start

### Basic Usage

```typescript
import { LLMFactory } from '@egintegrations/ai-services';

// Create a provider from environment variables
// Requires OPENAI_API_KEY, ANTHROPIC_API_KEY, or GOOGLE_API_KEY
const provider = LLMFactory.createFromEnv('openai');

// Or create one with an explicit config
const claude = LLMFactory.createProvider('anthropic', {
  apiKey: 'your-api-key',
  model: 'claude-3-5-sonnet-20241022',
  temperature: 0.7,
  maxTokens: 4096,
});

// Generate a completion
const response = await provider.complete([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is TypeScript?' },
]);

console.log(response.content);
console.log(`Cost: $${provider.calculateCost(response.usage)}`);
```

### Using Providers Directly

```typescript
import { OpenAIAdapter, AnthropicAdapter, GeminiAdapter } from '@egintegrations/ai-services';

// OpenAI
const openai = new OpenAIAdapter({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o',
});

// Anthropic
const claude = new AnthropicAdapter({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-3-5-sonnet-20241022',
});

// Google Gemini
const gemini = new GeminiAdapter({
  apiKey: process.env.GOOGLE_API_KEY!,
  model: 'gemini-2.0-flash-exp',
});
```

### Streaming

```typescript
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Explain quantum computing' },
];

for await (const chunk of provider.streamComplete(messages)) {
  process.stdout.write(chunk);
}
```

### Embeddings

```typescript
const openai = new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! });

const embedding = await openai.generateEmbedding('Hello world');
console.log(embedding); // 1536-dimensional vector
```
### Token Counting and Cost Estimation

```typescript
import {
  countTokens,
  estimateCost,
  estimateMessageTokens,
  truncateToTokens,
  splitIntoChunks,
} from '@egintegrations/ai-services';

// Count tokens in text
const tokens = countTokens('This is a test message');

// Estimate cost
const cost = estimateCost('openai', 'gpt-4', 1000, 500);

// Estimate tokens for messages
const messageTokens = estimateMessageTokens([
  { role: 'user', content: 'Hello' },
  { role: 'assistant', content: 'Hi there!' },
]);

// Truncate text to a token limit
const truncated = truncateToTokens(longText, 1000);

// Split into chunks
const chunks = splitIntoChunks(longText, 2000);
```

### RAG (Retrieval Augmented Generation)

```typescript
import { RAGService, OpenAIAdapter } from '@egintegrations/ai-services';

// Create an embedding provider (uses OpenAI by default)
const openai = new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! });

// Create the RAG service
const rag = new RAGService({
  embeddingProvider: {
    generateEmbedding: (text) => openai.generateEmbedding(text),
  },
});

// Add documents
await rag.addDocument({
  id: '1',
  content: 'TypeScript is a typed superset of JavaScript.',
  metadata: { source: 'docs' },
});
await rag.addDocument({
  id: '2',
  content: 'React is a library for building user interfaces.',
  metadata: { source: 'docs' },
});

// Search for relevant documents
const results = await rag.search({
  query: 'What is TypeScript?',
  topK: 3,
  threshold: 0.7,
});
console.log(results); // [{ document, score }]

// Augment a prompt with retrieved context
const { prompt, sources } = await rag.augmentPrompt(
  'Explain TypeScript',
  'You are a helpful assistant.'
);

// Use the augmented prompt with an LLM
const response = await openai.complete([
  { role: 'system', content: prompt },
]);
```

## API Reference

### LLMProvider

All providers extend `LLMProvider` and implement:

#### Methods
- `complete(messages: LLMMessage[]): Promise<LLMResponse>` - Generate a completion
- `streamComplete(messages: LLMMessage[]): AsyncIterableIterator<string>` - Stream a completion
- `countTokens(text: string): Promise<number>` - Count tokens
- `calculateCost(usage: TokenUsage): number` - Calculate cost
- `listModels(): string[]` - List available models
- `getProviderName(): string` - Get provider name
- `getDefaultModel(): string` - Get default model
- `generateEmbedding(text: string): Promise<number[]>` - Generate embeddings (OpenAI only)
#### Configuration

```typescript
interface LLMConfig {
  apiKey: string;
  model?: string;
  maxTokens?: number;
  temperature?: number;
  topP?: number;
  presencePenalty?: number;
  frequencyPenalty?: number;
  stopSequences?: string[];
}
```

### LLMFactory

- `createProvider(provider: ProviderType, config: LLMConfig): LLMProvider`
- `createFromEnv(provider: ProviderType): LLMProvider`
- `listProviders(): ProviderType[]`
### RAGService

- `addDocument(document: Omit<RAGDocument, 'embedding'>): Promise<void>`
- `search(query: RAGQuery): Promise<RAGResult[]>`
- `generateContext(results: RAGResult[]): string`
- `augmentPrompt(query: string, systemPrompt?: string): Promise<{ prompt: string; sources: RAGResult[] }>`
### Token Utilities

- `countTokens(text: string): number`
- `estimateCost(provider: string, model: string, promptTokens: number, completionTokens: number): number`
- `estimateMessageTokens(messages: Array<{ role: string; content: string }>): number`
- `truncateToTokens(text: string, maxTokens: number): string`
- `splitIntoChunks(text: string, maxTokensPerChunk: number): string[]`

## Supported Models
### OpenAI

- `gpt-4-turbo-preview`
- `gpt-4`
- `gpt-4o`
- `gpt-3.5-turbo`

### Anthropic
- `claude-3-5-sonnet-20241022` (default)
- `claude-3-5-haiku-20241022`
- `claude-3-opus-20240229`

### Google Gemini
- `gemini-2.0-flash-exp` (default, free during preview)
- `gemini-1.5-pro`
- `gemini-1.5-flash`

## Environment Variables
- `OPENAI_API_KEY` - OpenAI API key
- `ANTHROPIC_API_KEY` - Anthropic API key
- `GOOGLE_API_KEY` - Google API key
- `LLM_MODEL` - Default model (optional)
- `LLM_MAX_TOKENS` - Default max tokens (optional, default: 4096)
- `LLM_TEMPERATURE` - Default temperature (optional, default: 0.7)
## Cost Tracking

All providers include cost estimation based on current pricing (as of January 2026):

```typescript
const response = await provider.complete(messages);
const cost = provider.calculateCost(response.usage);
console.log(`Request cost: $${cost.toFixed(4)}`);
```

## Error Handling

```typescript
try {
  const response = await provider.complete(messages);
  console.log(response.content);
} catch (error) {
  // Narrow the unknown catch binding before reading .message
  const message = error instanceof Error ? error.message : String(error);
  if (message.includes('API key')) {
    console.error('Invalid API key');
  } else if (message.includes('rate limit')) {
    console.error('Rate limit exceeded');
  } else {
    console.error('Unknown error:', error);
  }
}
```

## License

MIT
## Credits

Extracted from egi-botnet (LLM adapters), weathernet (Gemini integration), and og-literacy-mvp (RAG patterns).
This package is maintained by EGI Integrations. For bugs or feature requests, please open an issue on the egi-comp-library repository.