# Unified LLM Service for Content Growth
A unified LLM service for Content Growth applications. This package provides a standardized interface for interacting with various LLM providers (OpenAI, Gemini) and supports "Bring Your Own Key" (BYOK) functionality via pluggable configuration.

## Installation

```bash
npm install @contentgrowth/llm-service
```
## Usage

The service requires an environment object (usually from a Cloudflare Worker) to access bindings.
```javascript
import { LLMService } from '@contentgrowth/llm-service';

// In your Worker
export default {
  async fetch(request, env, ctx) {
    const llmService = new LLMService(env);

    // Chat
    const response = await llmService.chat('Hello, how are you?', 'tenant-id');
    console.log(response.text);

    // Chat Completion (with system prompt)
    const result = await llmService.chatCompletion(
      [{ role: 'user', content: 'Write a poem' }],
      'tenant-id',
      'You are a poetic assistant'
    );
    console.log(result.content);

    // A fetch handler must return a Response
    return new Response(result.content);
  }
};
```
## Configuration

The service uses a `ConfigManager` to determine which LLM provider and API key to use for a given tenant.
#### Default Behavior (Cloudflare KV + Durable Objects)
By default, the service expects the `env` object passed to the constructor to contain:

- `TENANT_LLM_CONFIG`: A KV Namespace binding.
- `TENANT_DO`: A Durable Object Namespace binding.
It uses these to fetch tenant-specific configurations.
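For orientation, here is a minimal sketch of how these bindings might be declared in `wrangler.toml`; the KV namespace id and Durable Object class name are placeholders, not defined by this package:

```toml
# Hypothetical wrangler.toml bindings matching the defaults above.
kv_namespaces = [
  { binding = "TENANT_LLM_CONFIG", id = "<your-kv-namespace-id>" }
]

[durable_objects]
bindings = [
  { name = "TENANT_DO", class_name = "TenantDO" } # class name is a placeholder
]
```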
#### Custom Configuration (Pluggable Providers)
If your project stores tenant keys differently (e.g., in a SQL database, environment variables, or a different service), you can implement a custom ConfigProvider.
```javascript
import { LLMService, ConfigManager, BaseConfigProvider } from '@contentgrowth/llm-service';

// 1. Define your custom provider
class MyDatabaseConfigProvider extends BaseConfigProvider {
  async getConfig(tenantId, env) {
    // Fetch config from your database or other source
    // You can use 'env' here if you need access to bindings
    const apiKey = await getApiKeyFromDB(tenantId);
    return {
      provider: 'openai', // or 'gemini'
      apiKey: apiKey,
      models: {
        default: 'gpt-4o',
        // ... optional overrides
      },
      // Optional capabilities
      capabilities: { chat: true, image: true }
    };
  }
}

// 2. Register the provider at application startup
ConfigManager.setConfigProvider(new MyDatabaseConfigProvider());

// 3. Use LLMService as normal - it will now use your provider
const service = new LLMService(env);
```
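If your keys live in plain environment variables instead, here is a minimal sketch under the same interface (the variable names match the `.env` keys used under Testing below; the Gemini model name is illustrative, not prescribed by the package):

```javascript
import { ConfigManager, BaseConfigProvider } from '@contentgrowth/llm-service';

// Illustrative only: resolves every tenant to the same env-var-backed key.
class EnvVarConfigProvider extends BaseConfigProvider {
  async getConfig(tenantId, env) {
    const provider = env.LLM_PROVIDER || 'openai';
    return {
      provider,
      apiKey: provider === 'gemini' ? env.GEMINI_API_KEY : env.OPENAI_API_KEY,
      models: { default: provider === 'gemini' ? 'gemini-1.5-pro' : 'gpt-4o' }
    };
  }
}

ConfigManager.setConfigProvider(new EnvVarConfigProvider());
```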
## JSON Mode

The service supports native JSON mode for OpenAI and Gemini, guaranteeing valid JSON responses without escaping issues.
#### Basic JSON Mode
```javascript
const response = await llmService.chatCompletion(
  messages,
  tenantId,
  'You are a helpful assistant. Always respond in JSON.',
  { responseFormat: 'json' } // ← Enable JSON mode
);

// Response includes auto-parsed JSON
console.log(response.parsedContent); // Already parsed object
console.log(response.content);       // Raw JSON string
```
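The `autoParse` option (see Supported Options below) controls the parsing step. Assuming `parsedContent` is left unset when auto-parsing is disabled, a defensive consumption pattern looks like:

```javascript
// Fall back to manual parsing if parsedContent was not populated.
const data = response.parsedContent ?? JSON.parse(response.content);
console.log(data);
```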
#### JSON Mode with Schema Validation (Structured Outputs)
Define a schema to guarantee the response structure:
```javascript
const schema = {
  type: 'object',
  properties: {
    answer: { type: 'string' },
    confidence: { type: 'number' },
    sources: {
      type: 'array',
      items: { type: 'string' },
      nullable: true
    }
  },
  required: ['answer', 'confidence']
};

const response = await llmService.chatCompletion(
  messages,
  tenantId,
  systemPrompt,
  {
    responseFormat: 'json_schema',
    responseSchema: schema,
    schemaName: 'question_answer'
  }
);

// Guaranteed to match schema
const { answer, confidence, sources } = response.parsedContent;
```
#### Convenience Method
For JSON-only responses, use `chatCompletionJson()` to get parsed objects directly:
```javascript
// Returns the parsed object directly (not the response wrapper)
const data = await llmService.chatCompletionJson(
  messages,
  tenantId,
  systemPrompt,
  schema // optional
);

console.log(data.answer);     // Direct access to fields
console.log(data.confidence); // No .parsedContent needed
```
#### Flexible Call Signatures
The `chatCompletion()` method intelligently detects whether you're passing tools, options, or both:
```javascript
// All these work!
await chatCompletion(messages, tenant, prompt);
await chatCompletion(messages, tenant, prompt, tools);
await chatCompletion(messages, tenant, prompt, { responseFormat: 'json' });
await chatCompletion(messages, tenant, prompt, tools, { responseFormat: 'json' });
```
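A plausible sketch of how that detection could work (the library's actual logic may differ): tools arrive as an array of definitions, while options arrive as a plain object.

```javascript
// Disambiguate the optional 4th/5th arguments by shape.
function splitArgs(fourth, fifth) {
  if (Array.isArray(fourth)) {
    return { tools: fourth, options: fifth ?? {} };
  }
  return { tools: undefined, options: fourth ?? {} };
}
```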
#### Supported Options
- `responseFormat`: 'text' (default), 'json', or 'json_schema'
- `responseSchema`: JSON schema object (required for json_schema mode)
- `schemaName`: Name for the schema (optional, for json_schema mode)
- `strictSchema`: Enforce strict validation (default: true)
- `autoParse`: Auto-parse JSON responses (default: true)
- `temperature`: Override temperature
- `maxTokens`: Override max tokens
- `tier`: Model tier ('default', 'fast', 'smart')
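For example, several of these options combined in a single call (values are illustrative):

```javascript
const response = await llmService.chatCompletion(messages, tenantId, systemPrompt, {
  responseFormat: 'json',
  temperature: 0.2, // illustrative override
  maxTokens: 1024,  // illustrative override
  tier: 'fast'      // pick the faster model tier
});
```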
## Testing

1. Create a `.env` file (copy from `.env.example`):

   ```bash
   cp .env.example .env
   ```

2. Add your API keys to `.env`:

   ```ini
   LLM_PROVIDER=openai # or gemini
   OPENAI_API_KEY=sk-your-key-here
   GEMINI_API_KEY=your-gemini-key-here
   ```

3. Run tests:

   ```bash
   npm run test:json # Run comprehensive test suite
   npm run examples:json # Run interactive examples
   ```
See TESTING.md for detailed testing documentation.
## Publishing

To publish this package to NPM:
1. Update Version:
   Update the version in `package.json`.

2. Login to NPM:

   ```bash
   npm login
   ```

3. Publish:

   ```bash
   # For public access
   npm publish --access public
   ```
## Project Structure

- `LLMService`: Main service class (the package's primary export).
- `src/llm/config-manager.js`: Configuration resolution logic.
- `src/llm/config-provider.js`: Abstract provider interfaces.
- `src/llm/providers/`: Individual LLM provider implementations.
Run the local test script to verify imports and configuration:

```bash
node test-custom-config.js
```