# @quilltap/plugin-utils

Utility functions for Quilltap plugin development. This package provides runtime utilities that complement the type definitions in `@quilltap/plugin-types`.

## Installation

```bash
npm install @quilltap/plugin-utils @quilltap/plugin-types
```
## Tool Call Parsing

Parse tool calls from any LLM provider's response format into a standardized `ToolCallRequest[]`:
```typescript
import {
  parseToolCalls,
  parseOpenAIToolCalls,
  parseAnthropicToolCalls,
  parseGoogleToolCalls,
} from '@quilltap/plugin-utils';

// Auto-detect format
const toolCalls = parseToolCalls(response, 'auto');

// Or use provider-specific parsers
const openaiCalls = parseOpenAIToolCalls(response);
const anthropicCalls = parseAnthropicToolCalls(response);
const googleCalls = parseGoogleToolCalls(response);
```
## Tool Format Conversion

Convert between OpenAI, Anthropic, and Google tool formats:
```typescript
import {
  convertToAnthropicFormat,
  convertToGoogleFormat,
  convertToolsTo,
} from '@quilltap/plugin-utils';

// Convert a single tool
const anthropicTool = convertToAnthropicFormat(universalTool);
const googleTool = convertToGoogleFormat(universalTool);

// Convert multiple tools
const anthropicTools = convertToolsTo(tools, 'anthropic');
```
## Plugin Logging

Create a logger that integrates with Quilltap's core logging system when running inside the host application, or falls back to console logging when running standalone:
```typescript
import { createPluginLogger } from '@quilltap/plugin-utils';

// Create a logger for your plugin
const logger = createPluginLogger('qtap-plugin-my-provider');

// Use it like any standard logger
logger.debug('Initializing provider', { version: '1.0.0' });
logger.info('Provider ready');
logger.warn('Rate limit approaching', { remaining: 10 });
logger.error('API call failed', { endpoint: '/chat' }, error);

// Create child loggers with additional context
const childLogger = logger.child({ component: 'auth' });
childLogger.info('Validating API key');
```
When running inside Quilltap:

- Logs are routed to Quilltap's core logging system
- Logs appear in `logs/combined.log` and on the console
- Each log is tagged with `{ plugin: 'your-plugin-name', module: 'plugin' }`

When running standalone:

- Logs are written to the console with a `[plugin-name]` prefix
- Respects the `LOG_LEVEL` or `QUILLTAP_LOG_LEVEL` environment variables
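The environment fallback can be pictured with a small sketch. This is illustrative only: it assumes `LOG_LEVEL` takes precedence over `QUILLTAP_LOG_LEVEL` and that the default level is `'info'`; the package's actual precedence may differ.

```typescript
// Sketch of the env-var fallback described above (assumed precedence:
// LOG_LEVEL, then QUILLTAP_LOG_LEVEL, then a default of 'info').
type LogLevel = 'debug' | 'info' | 'warn' | 'error';
const VALID_LEVELS = ['debug', 'info', 'warn', 'error'];

function resolveLogLevel(env: Record<string, string | undefined>): LogLevel {
  const raw = env.LOG_LEVEL ?? env.QUILLTAP_LOG_LEVEL ?? 'info';
  // Unrecognized values fall back to the default rather than erroring.
  return VALID_LEVELS.includes(raw) ? (raw as LogLevel) : 'info';
}

console.log(resolveLogLevel({ LOG_LEVEL: 'debug' }));         // 'debug'
console.log(resolveLogLevel({ QUILLTAP_LOG_LEVEL: 'warn' })); // 'warn'
console.log(resolveLogLevel({ LOG_LEVEL: 'verbose' }));       // unrecognized -> 'info'
```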
### Parsing Functions

| Function | Description |
|----------|-------------|
| `parseToolCalls(response, format)` | Parse tool calls with auto-detection or explicit format |
| `parseOpenAIToolCalls(response)` | Parse OpenAI/Grok format tool calls |
| `parseAnthropicToolCalls(response)` | Parse Anthropic format tool calls |
| `parseGoogleToolCalls(response)` | Parse Google Gemini format tool calls |
| `detectToolCallFormat(response)` | Detect the format of a response |
| `hasToolCalls(response)` | Check if a response contains tool calls |
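For intuition, auto-detection can be keyed on each provider's response shape, since the three formats put tool calls in different places. The sketch below is a rough illustration of that idea, not the package's actual heuristics in `detectToolCallFormat`:

```typescript
// Illustrative-only detection keyed on response shape. The field names are
// the providers' public response fields; real detection may be stricter.
type ToolCallFormat = 'openai' | 'anthropic' | 'google' | 'unknown';

function sketchDetectFormat(response: any): ToolCallFormat {
  // OpenAI/Grok responses carry choices[].message (tool calls under tool_calls)
  if (response?.choices?.[0]?.message) return 'openai';
  // Anthropic responses carry an array of content blocks ('tool_use' for calls)
  if (Array.isArray(response?.content)) return 'anthropic';
  // Google Gemini responses carry candidates[].content.parts[].functionCall
  if (Array.isArray(response?.candidates)) return 'google';
  return 'unknown';
}

console.log(sketchDetectFormat({ choices: [{ message: { content: 'hi' } }] })); // 'openai'
console.log(sketchDetectFormat({ content: [{ type: 'tool_use' }] }));           // 'anthropic'
```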
### Conversion Functions

| Function | Description |
|----------|-------------|
| `convertToAnthropicFormat(tool)` | Convert universal tool to Anthropic format |
| `convertToGoogleFormat(tool)` | Convert universal tool to Google format |
| `convertFromAnthropicFormat(tool)` | Convert Anthropic tool to universal format |
| `convertFromGoogleFormat(tool)` | Convert Google tool to universal format |
| `convertToolTo(tool, target)` | Convert a tool to any supported format |
| `convertToolsTo(tools, target)` | Convert multiple tools to any format |
| `applyDescriptionLimit(tool, maxBytes)` | Truncate tool description if too long |
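`applyDescriptionLimit` takes a byte limit rather than a character count, because providers typically cap descriptions in bytes and multi-byte characters make the two diverge. A hypothetical sketch of byte-aware truncation (the real function's exact strategy, e.g. whether it appends an ellipsis, may differ):

```typescript
// Hypothetical byte-aware truncation, measuring UTF-8 size with TextEncoder.
function truncateToBytes(text: string, maxBytes: number): string {
  const encoder = new TextEncoder();
  if (encoder.encode(text).length <= maxBytes) return text;
  let result = text;
  // Drop trailing characters until the UTF-8 encoding fits the limit.
  while (encoder.encode(result).length > maxBytes) {
    result = result.slice(0, -1);
  }
  return result;
}

console.log(truncateToBytes('hello world', 5)); // 'hello'
```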
### Logging Functions

| Function | Description |
|----------|-------------|
| `createPluginLogger(name, minLevel?)` | Create a plugin logger with core bridge |
| `hasCoreLogger()` | Check if running inside Quilltap |
| `getLogLevelFromEnv()` | Get log level from environment variables |
| `createConsoleLogger(prefix, minLevel?)` | Create a standalone console logger |
| `createNoopLogger()` | Create a no-op logger |
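A no-op logger is handy when exercising plugin code in unit tests without producing output. The sketch below shows the general shape such a logger takes; the object actually returned by `createNoopLogger` may carry more methods (e.g. `child`), so treat the interface here as an assumption:

```typescript
// Sketch of a no-op logger: the full method surface, all bodies empty.
interface SketchLogger {
  debug(msg: string, meta?: object): void;
  info(msg: string, meta?: object): void;
  warn(msg: string, meta?: object): void;
  error(msg: string, meta?: object, err?: Error): void;
}

function sketchNoopLogger(): SketchLogger {
  const noop = () => {};
  return { debug: noop, info: noop, warn: noop, error: noop };
}

const quiet = sketchNoopLogger();
quiet.info('this is silently discarded');
```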
## OpenAI-Compatible Provider Base Class

Create custom LLM providers for OpenAI-compatible APIs with minimal code:
```typescript
import { OpenAICompatibleProvider } from '@quilltap/plugin-utils';

// Create a provider for any OpenAI-compatible API
export class MyLLMProvider extends OpenAICompatibleProvider {
  constructor() {
    super({
      baseUrl: 'https://api.my-llm-service.com/v1',
      providerName: 'MyLLM',
      requireApiKey: true,
      attachmentErrorMessage: 'MyLLM does not support file attachments',
    });
  }
}
```
This gives you a complete `LLMProvider` implementation with:
- Streaming and non-streaming chat completions
- API key validation
- Model listing
- Proper error handling and logging
Configuration options:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `baseUrl` | `string` | (required) | API endpoint URL with version path |
| `providerName` | `string` | `'OpenAICompatible'` | Name used in log messages |
| `requireApiKey` | `boolean` | `false` | Whether an API key is mandatory |
| `attachmentErrorMessage` | `string` | (default message) | Error shown for attachment failures |
Note: requires `openai` as a peer dependency:

```bash
npm install openai
```
## Roleplay Template Plugins

Create roleplay template plugins with built-in validation and logging:
```typescript
import { createSingleTemplatePlugin } from '@quilltap/plugin-utils';

// Simple single-template plugin
export const plugin = createSingleTemplatePlugin({
  templateId: 'my-rp-format',
  displayName: 'My RP Format',
  description: 'A custom roleplay formatting style',
  systemPrompt: `[FORMATTING INSTRUCTIONS]
1. DIALOGUE: Use quotation marks
2. ACTIONS: Use asterisks like this
3. THOUGHTS: Use angle brackets`,
  tags: ['custom', 'roleplay'],
  enableLogging: true,
});
```
For plugins providing multiple templates:
```typescript
import { createRoleplayTemplatePlugin } from '@quilltap/plugin-utils';

export const plugin = createRoleplayTemplatePlugin({
  metadata: {
    templateId: 'rp-format-pack',
    displayName: 'RP Format Pack',
    description: 'A collection of roleplay formats',
  },
  templates: [
    {
      name: 'Screenplay',
      description: 'Screenplay-style formatting',
      systemPrompt: '...',
    },
    {
      name: 'Novel',
      description: 'Novel-style prose',
      systemPrompt: '...',
    },
  ],
  enableLogging: true,
});
```
### Template Functions

| Function | Description |
|----------|-------------|
| `createRoleplayTemplatePlugin(options)` | Create a plugin with full control over metadata and templates |
| `createSingleTemplatePlugin(options)` | Simplified helper for plugins with a single template |
| `validateTemplateConfig(template)` | Validate an individual template configuration |
| `validateRoleplayTemplatePlugin(plugin)` | Validate a complete roleplay template plugin |
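The validators catch malformed templates before a plugin loads. Below is a rough sketch of the kind of checks such a validator performs; the actual rules in `validateTemplateConfig` are an assumption here, and the field names follow the multi-template example above:

```typescript
// Assumed template fields, matching the multi-template example above.
interface SketchTemplate {
  name: string;
  description: string;
  systemPrompt: string;
}

// Collect human-readable errors rather than throwing on the first problem.
function sketchValidateTemplate(t: Partial<SketchTemplate>): string[] {
  const errors: string[] = [];
  if (!t.name?.trim()) errors.push('name is required');
  if (!t.description?.trim()) errors.push('description is required');
  if (!t.systemPrompt?.trim()) errors.push('systemPrompt is required');
  return errors;
}

console.log(sketchValidateTemplate({ name: 'Novel' }));
// ['description is required', 'systemPrompt is required']
```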
## Complete Example

```typescript
import { createPluginLogger, parseOpenAIToolCalls } from '@quilltap/plugin-utils';
import type { LLMProvider, LLMParams, LLMResponse } from '@quilltap/plugin-types';
import OpenAI from 'openai';

const logger = createPluginLogger('qtap-plugin-my-provider');

export class MyProvider implements LLMProvider {
  private client: OpenAI;

  constructor(apiKey: string) {
    this.client = new OpenAI({ apiKey });
    logger.debug('Provider initialized');
  }

  async sendMessage(params: LLMParams, apiKey: string): Promise<LLMResponse> {
    logger.debug('Sending message', { model: params.model, messageCount: params.messages.length });
    try {
      const response = await this.client.chat.completions.create({
        model: params.model,
        messages: params.messages,
        tools: params.tools,
      });

      // Parse tool calls using the utility
      const toolCalls = parseOpenAIToolCalls(response);

      logger.info('Received response', {
        hasToolCalls: toolCalls.length > 0,
        tokens: response.usage?.total_tokens,
      });

      return {
        content: response.choices[0].message.content || '',
        toolCalls,
        usage: {
          promptTokens: response.usage?.prompt_tokens || 0,
          completionTokens: response.usage?.completion_tokens || 0,
          totalTokens: response.usage?.total_tokens || 0,
        },
      };
    } catch (error) {
      logger.error('Failed to send message', { model: params.model }, error as Error);
      throw error;
    }
  }
}
```
## License

MIT - Foundry-9 LLC