# Vercel AI SDK runtime adapter for VAT agents
Vercel AI SDK runtime adapter for VAT (Vibe Agent Toolkit) agents.
Converts VAT archetype agents to Vercel AI SDK primitives, enabling portability across LLM providers (OpenAI, Anthropic, etc.) while maintaining type safety and agent semantics.
## Installation

```bash
npm install @vibe-agent-toolkit/runtime-vercel-ai-sdk ai
# or
bun add @vibe-agent-toolkit/runtime-vercel-ai-sdk ai
```
You'll also need an LLM provider package:
```bash
npm install @ai-sdk/openai     # For OpenAI
npm install @ai-sdk/anthropic  # For Anthropic Claude
```
## Pure Function Tools

Converts synchronous, deterministic VAT agents into Vercel AI SDK tools that LLMs can call.
Use cases: Validation, transformation, computation, structured data operations.
Archetypes: Pure Function Tool (Archetype 1)
#### Example: Haiku Validator
```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { haikuValidatorAgent } from '@vibe-agent-toolkit/vat-example-cat-agents';
import { HaikuSchema, HaikuValidationResultSchema } from '@vibe-agent-toolkit/vat-example-cat-agents';
import { convertPureFunctionToTool } from '@vibe-agent-toolkit/runtime-vercel-ai-sdk';

// Convert VAT agent to Vercel AI tool
const haikuTool = convertPureFunctionToTool(
  haikuValidatorAgent,
  HaikuSchema,
  HaikuValidationResultSchema
);

// Use with generateText()
const result = await generateText({
  model: openai('gpt-4'),
  tools: {
    validateHaiku: haikuTool.tool
  },
  prompt: 'Write a haiku about an orange cat and validate it using the validateHaiku tool.'
});

console.log(result.text);
console.log(result.toolCalls); // Shows validation results
```
#### Batch Conversion
```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { convertPureFunctionsToTools } from '@vibe-agent-toolkit/runtime-vercel-ai-sdk';
import { haikuValidatorAgent, nameValidatorAgent } from '@vibe-agent-toolkit/vat-example-cat-agents';
import { HaikuSchema, HaikuValidationResultSchema, NameValidationInputSchema, NameValidationResultSchema } from '@vibe-agent-toolkit/vat-example-cat-agents';

const tools = convertPureFunctionsToTools({
  validateHaiku: {
    agent: haikuValidatorAgent,
    inputSchema: HaikuSchema,
    outputSchema: HaikuValidationResultSchema,
  },
  validateName: {
    agent: nameValidatorAgent,
    inputSchema: NameValidationInputSchema,
    outputSchema: NameValidationResultSchema,
  },
});

const result = await generateText({
  model: openai('gpt-4'),
  tools,
  prompt: 'Generate and validate cat names and haikus...'
});
```
## Conversational Assistants

Converts multi-turn conversational agents into executable functions that maintain conversation history.
Use cases: Interactive dialogs, multi-turn decision-making, stateful conversations, progressive information gathering.
Archetypes: Conversational Assistant (Archetype 3)
#### Example: Breed Advisor
```typescript
import { openai } from '@ai-sdk/openai';
import { breedAdvisorAgent } from '@vibe-agent-toolkit/vat-example-cat-agents';
import { BreedAdvisorInputSchema, BreedAdvisorOutputSchema } from '@vibe-agent-toolkit/vat-example-cat-agents';
import { convertConversationalAssistantToFunction, type ConversationSession } from '@vibe-agent-toolkit/runtime-vercel-ai-sdk';

// Convert VAT agent to executable function
const breedAdvisor = convertConversationalAssistantToFunction(
  breedAdvisorAgent,
  BreedAdvisorInputSchema,
  BreedAdvisorOutputSchema,
  {
    model: openai('gpt-4'),
    temperature: 0.8,
  }
);

// Initialize conversation session
const session: ConversationSession = { history: [] };

// Turn 1: Initial inquiry
const turn1 = await breedAdvisor(
  { message: "I'm looking for a cat", sessionState: {} },
  session
);
console.log(turn1.reply); // "Great! What's your living situation?"

// Turn 2: Continue conversation (history is maintained)
const turn2 = await breedAdvisor(
  { message: "Small apartment, love jazz music", sessionState: turn1.sessionState },
  session
);
console.log(turn2.recommendations); // Breed recommendations based on profile
```
#### Batch Conversion with Independent Sessions
```typescript
import { openai } from '@ai-sdk/openai';
import { convertConversationalAssistantsToFunctions, type ConversationSession } from '@vibe-agent-toolkit/runtime-vercel-ai-sdk';
import { breedAdvisorAgent, petCareAdvisorAgent } from '@vibe-agent-toolkit/vat-example-cat-agents';
import { BreedAdvisorInputSchema, BreedAdvisorOutputSchema, PetCareInputSchema, PetCareOutputSchema } from '@vibe-agent-toolkit/vat-example-cat-agents';

const assistants = convertConversationalAssistantsToFunctions(
  {
    breedAdvisor: {
      agent: breedAdvisorAgent,
      inputSchema: BreedAdvisorInputSchema,
      outputSchema: BreedAdvisorOutputSchema,
    },
    petCareAdvisor: {
      agent: petCareAdvisorAgent,
      inputSchema: PetCareInputSchema,
      outputSchema: PetCareOutputSchema,
    },
  },
  {
    model: openai('gpt-4'),
    temperature: 0.8,
  }
);

// Each assistant maintains its own independent session
const breedSession: ConversationSession = { history: [] };
const careSession: ConversationSession = { history: [] };

const breedResponse = await assistants.breedAdvisor({ message: "I want a cat" }, breedSession);
const careResponse = await assistants.petCareAdvisor({ message: "Feeding schedule?" }, careSession);
```
## LLM Analyzers

Converts single-shot LLM analysis agents into executable functions powered by the Vercel AI SDK.
Use cases: Classification, extraction, generation, summarization, sentiment analysis.
Archetypes: One-Shot LLM Analyzer (Archetype 2)
#### Example: Cat Name Generator
```typescript
import { openai } from '@ai-sdk/openai';
import { nameGeneratorAgent } from '@vibe-agent-toolkit/vat-example-cat-agents';
import { NameGeneratorInputSchema, NameSuggestionSchema } from '@vibe-agent-toolkit/vat-example-cat-agents';
import { convertLLMAnalyzerToFunction } from '@vibe-agent-toolkit/runtime-vercel-ai-sdk';

// Convert VAT agent to executable function
const generateName = convertLLMAnalyzerToFunction(
  nameGeneratorAgent,
  NameGeneratorInputSchema,
  NameSuggestionSchema,
  {
    model: openai('gpt-4'),
    temperature: 0.9, // High creativity for name generation
  }
);

// Use the function directly
const result = await generateName({
  characteristics: {
    physical: {
      furColor: 'Orange',
      size: 'medium',
    },
    behavioral: {
      personality: ['Mischievous', 'Energetic'],
      quirks: ['Knocks things off tables'],
    },
    description: 'A mischievous orange cat who loves causing trouble',
  },
});

console.log(result.name);         // "Sir Knocksalot"
console.log(result.reasoning);    // "Given the cat's tendency to knock..."
console.log(result.alternatives); // ["Lord Tumbleton", "Duke Paws"]
```
#### Batch Conversion with Shared Config
```typescript
import { openai } from '@ai-sdk/openai';
import { convertLLMAnalyzersToFunctions } from '@vibe-agent-toolkit/runtime-vercel-ai-sdk';
import { nameGeneratorAgent, haikuGeneratorAgent } from '@vibe-agent-toolkit/vat-example-cat-agents';
import { NameGeneratorInputSchema, NameSuggestionSchema, HaikuGeneratorInputSchema, HaikuSchema } from '@vibe-agent-toolkit/vat-example-cat-agents';

const analyzers = convertLLMAnalyzersToFunctions(
  {
    generateName: {
      agent: nameGeneratorAgent,
      inputSchema: NameGeneratorInputSchema,
      outputSchema: NameSuggestionSchema,
    },
    generateHaiku: {
      agent: haikuGeneratorAgent,
      inputSchema: HaikuGeneratorInputSchema,
      outputSchema: HaikuSchema,
    },
  },
  {
    model: openai('gpt-4'),
    temperature: 0.8, // Shared config for all analyzers
  }
);

// Use the functions
const name = await analyzers.generateName({ characteristics });
const haiku = await analyzers.generateHaiku({ characteristics });
```
## Provider Support

Works with any Vercel AI SDK provider:
```typescript
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

// OpenAI
const openaiConfig = { model: openai('gpt-4'), temperature: 0.7 };

// Anthropic Claude
const anthropicConfig = { model: anthropic('claude-3-5-sonnet-20241022'), temperature: 0.8 };

// Google Gemini
const googleConfig = { model: google('gemini-2.0-flash-001'), temperature: 0.9 };
```
## Mock Mode vs. Real LLM Mode

VAT agents support mock mode for testing. When using this adapter, agents always run in real LLM mode:
```typescript
// In VAT agent definition (supports both modes)
export const nameGeneratorAgent = defineLLMAnalyzer(
  { name: 'name-generator', ... },
  async (input, ctx) => {
    if (ctx.mockable) {
      // Fast mock for testing
      return mockGenerateName(input);
    }
    // Real LLM call
    const response = await ctx.callLLM(prompt);
    return JSON.parse(response);
  }
);

// With Vercel AI SDK adapter (always real LLM)
const generateName = convertLLMAnalyzerToFunction(
  nameGeneratorAgent,
  NameGeneratorInputSchema,
  NameSuggestionSchema,
  { model: openai('gpt-4') }
);
// ctx.mockable = false, uses ctx.callLLM() powered by Vercel AI SDK
```
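The mockable-context pattern above can be illustrated with a self-contained sketch. Names here (`AnalyzerContext`, `generateName`, the mock return shape) are hypothetical stand-ins, not the actual VAT API:

```typescript
// Hypothetical sketch of the mockable-context pattern: one handler serves
// both deterministic mocks (tests) and real LLM calls (production).
interface AnalyzerContext {
  mockable: boolean;
  callLLM: (prompt: string) => Promise<string>;
}

// The handler is written once; the runtime decides which branch executes.
async function generateName(
  input: { furColor: string },
  ctx: AnalyzerContext
): Promise<{ name: string }> {
  if (ctx.mockable) {
    // Deterministic mock: fast, free, used by unit tests.
    return { name: `Mock-${input.furColor}` };
  }
  // Real path: an adapter would wire ctx.callLLM to an actual provider.
  const response = await ctx.callLLM(`Name a ${input.furColor} cat as JSON`);
  return JSON.parse(response) as { name: string };
}

// A test context short-circuits the LLM entirely.
const testCtx: AnalyzerContext = {
  mockable: true,
  callLLM: async () => {
    throw new Error('should not be called in mock mode');
  },
};
```

The point of the pattern is that the adapter flips `mockable` to `false` and supplies a real `callLLM`, so agent code needs no changes between test and production.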
## API Reference

### `convertConversationalAssistantToFunction`

Converts a Conversational Assistant agent to an executable async function with conversation history.
Parameters:
- `agent: Agent` - The VAT conversational assistant agent
- `inputSchema: z.ZodType` - Input Zod schema
- `outputSchema: z.ZodType` - Output Zod schema
- `llmConfig: VercelAILLMConfig` - LLM configuration (model, temperature, etc.)

Returns: `(input: TInput, session: ConversationSession) => Promise<TOutput>` - Executable async function that requires a session parameter
Session Management:
```typescript
interface ConversationSession {
  history: Message[];              // Maintained across turns
  state?: Record<string, unknown>; // Optional session state
}
```
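Conceptually, each call appends the user turn and the assistant reply to the shared session. A simplified, self-contained sketch of that accumulation (illustrative only; `Message`, `turn`, and the `reply` callback are hypothetical, not the adapter's internals):

```typescript
// Simplified sketch of per-session history accumulation across turns.
interface Message {
  role: 'user' | 'assistant';
  content: string;
}
interface ConversationSession {
  history: Message[];
  state?: Record<string, unknown>;
}

// Each turn records the user's message, asks for a reply given the full
// history so far, and records that reply before returning it.
async function turn(
  session: ConversationSession,
  userMessage: string,
  reply: (history: Message[]) => Promise<string>
): Promise<string> {
  session.history.push({ role: 'user', content: userMessage });
  const answer = await reply(session.history);
  session.history.push({ role: 'assistant', content: answer });
  return answer;
}
```

Because the session object is passed in by the caller, two sessions never share history, which is what makes the independent-session batch usage above work.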
### `convertConversationalAssistantsToFunctions`

Batch converts multiple Conversational Assistant agents with shared LLM config.

Parameters:
- `configs: Record<string, ConversationalAssistantConversionConfig>` - Map of assistant names to conversion configs
- `llmConfig: VercelAILLMConfig` - Shared LLM configuration

Returns: `Record` - Map of assistant names to executable functions
### `convertPureFunctionToTool`

Converts a PureFunctionAgent to a Vercel AI SDK tool.

Parameters:
- `agent: PureFunctionAgent` - The VAT agent
- `inputSchema: z.ZodType` - Input Zod schema
- `outputSchema: z.ZodType` - Output Zod schema

Returns: `ConversionResult` with:
- `tool: VercelAITool` - The tool ready for use with `generateText()`
- `inputSchema: z.ZodType` - Original input schema
- `outputSchema: z.ZodType` - Original output schema
- `metadata` - Agent name, description, version, archetype
### `convertPureFunctionsToTools`

Batch converts multiple PureFunctionAgents to tools.

Parameters:
- `configs: Record<string, ToolConversionConfig>` - Map of tool names to conversion configs

Returns: `Record` - Map of tool names to Vercel AI tools
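Conceptually, the batch helpers just map the corresponding single-agent conversion over each named entry. A minimal, self-contained sketch of that shape (hypothetical `batchConvert`, not the package's implementation):

```typescript
// Generic sketch: batch conversion applies a per-entry converter to every
// named config and returns a record with the same keys.
function batchConvert<C, T>(
  configs: Record<string, C>,
  convertOne: (config: C) => T
): Record<string, T> {
  return Object.fromEntries(
    Object.entries(configs).map(([name, config]) => [name, convertOne(config)])
  );
}
```

This is why every batch API accepts a name-keyed record and returns a record with identical keys, and why a shared `llmConfig` can simply be closed over by `convertOne`.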
### `convertLLMAnalyzerToFunction`

Converts an LLM Analyzer agent to an executable async function.

Parameters:
- `agent: Agent` - The VAT LLM analyzer agent
- `inputSchema: z.ZodType` - Input Zod schema
- `outputSchema: z.ZodType` - Output Zod schema
- `llmConfig: VercelAILLMConfig` - LLM configuration (model, temperature, etc.)

Returns: `(input: TInput) => Promise<TOutput>` - Executable async function
### `convertLLMAnalyzersToFunctions`

Batch converts multiple LLM Analyzer agents with shared LLM config.

Parameters:
- `configs: Record<string, LLMAnalyzerConversionConfig>` - Map of function names to conversion configs
- `llmConfig: VercelAILLMConfig` - Shared LLM configuration

Returns: `Record` - Map of function names to executable functions
## Types

### `VercelAILLMConfig`

```typescript
interface VercelAILLMConfig {
  model: LanguageModelV1;                       // From Vercel AI SDK
  temperature?: number;                         // 0-1, default 0.7
  maxTokens?: number;                           // Maximum tokens to generate
  additionalSettings?: Record<string, unknown>; // Extra provider settings
}
```
### `ToolConversionConfig`

```typescript
interface ToolConversionConfig {
  agent: PureFunctionAgent;
  inputSchema: z.ZodType;
  outputSchema: z.ZodType;
}
```
### `ConversationalAssistantConversionConfig`

```typescript
interface ConversationalAssistantConversionConfig {
  agent: Agent;
  inputSchema: z.ZodType;
  outputSchema: z.ZodType;
}
```
### `LLMAnalyzerConversionConfig`

```typescript
interface LLMAnalyzerConversionConfig {
  agent: Agent;
  inputSchema: z.ZodType;
  outputSchema: z.ZodType;
}
```
See `@vibe-agent-toolkit/vat-example-cat-agents` for complete agent examples that work with this adapter.
## Testing

### Unit Tests

Standard unit tests verify adapter structure and type safety without making real LLM calls:
```bash
bun run test        # Run all unit tests (free, fast)
bun run test:watch  # Watch mode for development
```
### LLM Regression Tests

LLM regression tests make real API calls to OpenAI and Anthropic to verify end-to-end integration. These tests are:
- Expensive: Cost money (API calls to GPT-4o-mini and Claude 4.5 Sonnet)
- Slow: Take 15-60 seconds depending on API latency
- Skipped by default: Only run when explicitly requested
Run regression tests:

```bash
# From this package directory
bun run test:llm-regression
```
What they test:
- ✅ Pure function tools work with real LLMs
- ✅ LLM analyzer functions work with OpenAI
- ✅ LLM analyzer functions work with Anthropic Claude
- ✅ Same adapter code works across providers (provider-agnostic architecture)
Requirements:
- `OPENAI_API_KEY` environment variable for OpenAI tests
- `ANTHROPIC_API_KEY` environment variable for Anthropic tests
- Tests gracefully skip if API keys are not set

When to run:
- Before releases to verify provider integrations still work
- After upgrading `ai` or provider packages (e.g., `@ai-sdk/openai`)

Cost estimate:
- Full test suite: ~4 LLM calls (2 OpenAI, 2 Anthropic)
- Approximate cost: $0.01-0.05 per run (varies by model pricing)
## License

MIT