# Runflow SDK

> A powerful TypeScript-first framework for building intelligent AI agents and multi-agent systems

Runflow SDK is a comprehensive, type-safe framework for building AI agents, complex workflows, and multi-agent systems. It is designed to be simple to use, yet powerful enough for advanced use cases.




- **Intelligent Agents** - Create agents with LLM, tools, memory, and RAG capabilities
- **Type-Safe Tools** - Build custom tools with Zod schema validation
- **Dynamic Connectors** - Connect to any API with runtime schema loading
- **Workflows** - Orchestrate complex multi-step processes with conditional logic
- **Memory Management** - Persistent conversation history with automatic summarization
- **Agentic RAG** - LLM-driven semantic search in vector knowledge bases
- **Multi-Agent Systems** - Supervisor pattern with automatic agent routing
- **Full Observability** - Automatic tracing with cost tracking and performance metrics
- **Streaming Support** - Real-time streaming responses with memory persistence
- **Multi-Modal** - Support for text and images (vision models)
- **Audio Transcription** - Automatic audio-to-text with multiple providers (Whisper, Deepgram, etc.)
- **Multiple Providers** - OpenAI, Anthropic (Claude), and AWS Bedrock
- Installation
- Quick Start
- Core Concepts
- Agents
- Context Management
- Memory
- Tools
- Connectors
- Workflows
- Knowledge (RAG)
- LLM Standalone
- Media Processing
- Observability
- Advanced Examples
- Real-World Use Cases
- Customer Support Agent with RAG
- Sales Automation with Multi-Step Workflow
- Intelligent Collections Agent (WhatsApp)
- Customer Onboarding Assistant
- Feedback Analysis System
- Multi-Agent System (Supervisor Pattern)
- Configuration
- API Reference
- TypeScript Types
- Providers
- Troubleshooting
- Contributing
- License
---
```bash
npm install @runflow-ai/sdk
# or
yarn add @runflow-ai/sdk
# or
pnpm add @runflow-ai/sdk
```
- Node.js: >= 22.0.0
- TypeScript: >= 5.0.0 (recommended)
The SDK includes the following libraries out-of-the-box. No need to install them separately - they're available in all your agents:
| Library | Version | Description | Import |
|---------|---------|-------------|--------|
| axios | ^1.7.0 | HTTP client for API requests | `import axios from 'axios'` |
| zod | ^3.22.0 | Schema validation and TypeScript inference | `import { z } from 'zod'` |
| date-fns | ^3.0.0 | Modern date utility library | `import { format, addDays } from 'date-fns'` |
| lodash | ^4.17.21 | JavaScript utility library | `import _ from 'lodash'` |
| cheerio | ^1.0.0 | Fast, flexible HTML/XML parsing | `import * as cheerio from 'cheerio'` |
| pino | ^8.19.0 | Fast JSON logger | `import pino from 'pino'` |
Quick Examples:
```typescript
import { createTool } from '@runflow-ai/sdk';
import { z } from 'zod';
import axios from 'axios';
import { format, addDays } from 'date-fns';
import _ from 'lodash';
const myTool = createTool({
id: 'example-tool',
description: 'Shows all available libraries',
inputSchema: z.object({
url: z.string().url(),
data: z.array(z.any()),
}),
execute: async ({ context }) => {
// ✅ HTTP requests with axios
const response = await axios.get(context.url);
// ✅ Date manipulation
const tomorrow = addDays(new Date(), 1);
const formatted = format(tomorrow, 'yyyy-MM-dd');
// ✅ Array/Object utilities with lodash
const unique = _.uniq(context.data);
const grouped = _.groupBy(context.data, 'category');
return { response: response.data, date: formatted, unique, grouped };
},
});
```
> 💡 Tip: You can also use the SDK's HTTP helpers for convenience:
>
> ```typescript
> import { httpGet, httpPost } from '@runflow-ai/sdk/http';
> const data = await httpGet('https://api.example.com/data');
> ```
---
> Note on Parameters:
> - `message` is required (the user's message)
> - `companyId` is optional (for multi-tenant applications - your end-user's company ID)
> - `sessionId` is optional but recommended (maintains conversation history)
> - `userId` is optional (for user identification)
> - All other fields are optional and can be set via `Runflow.identify()` or environment variables
```typescript
import { Agent, openai } from '@runflow-ai/sdk';
// Create a basic agent
const agent = new Agent({
name: 'Support Agent',
instructions: 'You are a helpful customer support assistant.',
model: openai('gpt-4o'),
});
// Process a message
const result = await agent.process({
message: 'I need help with my order', // Required
sessionId: 'session_456', // Optional: For conversation history
userId: 'user_789', // Optional: User identifier
companyId: 'company_123', // Optional: For multi-tenant apps
});
console.log(result.message);
```
```typescript
import { Agent, openai } from '@runflow-ai/sdk';
const agent = new Agent({
name: 'Support Agent',
instructions: 'You are a helpful assistant with memory.',
model: openai('gpt-4o'),
memory: {
maxTurns: 10,
},
});
// First interaction
await agent.process({
message: 'My name is John',
sessionId: 'session_456', // Same session for conversation continuity
});
// Second interaction - agent remembers the name
const result = await agent.process({
message: 'What is my name?',
sessionId: 'session_456', // Same session
});
console.log(result.message); // "Your name is John"
```
```typescript
import { Agent, openai, createTool } from '@runflow-ai/sdk';
import { z } from 'zod';
// Create a custom tool
const weatherTool = createTool({
id: 'get-weather',
description: 'Get current weather for a location',
inputSchema: z.object({
location: z.string(),
}),
execute: async ({ context }) => {
// Fetch weather data
return {
temperature: 22,
condition: 'Sunny',
location: context.location,
};
},
});
// Create agent with tool
const agent = new Agent({
name: 'Weather Agent',
instructions: 'You help users check the weather.',
model: openai('gpt-4o'),
tools: {
weather: weatherTool,
},
});
const result = await agent.process({
message: 'What is the weather in Sรฃo Paulo?',
});
console.log(result.message);
```
```typescript
import { Agent, openai } from '@runflow-ai/sdk';
const agent = new Agent({
name: 'Support Agent',
instructions: 'You are a helpful support agent.',
model: openai('gpt-4o'),
rag: {
vectorStore: 'support-docs',
k: 5,
threshold: 0.7,
searchPrompt: `Use searchKnowledge tool when user asks about:
- Technical problems
- Process questions
- Specific information`,
},
});
// Agent automatically has 'searchKnowledge' tool
// LLM decides when to use it (not always searching - more efficient!)
const result = await agent.process({
message: 'How do I reset my password?',
});
```
---
Agents are the fundamental building blocks of the Runflow SDK. Each agent is configured with:
- Name: Agent identifier
- Instructions: Behavior instructions (system prompt)
- Model: LLM model to use (OpenAI, Anthropic, Bedrock)
- Tools: Available tools for the agent
- Memory: Memory configuration
- RAG: Knowledge base search configuration
#### Complete Agent Configuration
```typescript
import { Agent, anthropic, openai } from '@runflow-ai/sdk';
const agent = new Agent({
name: 'Advanced Support Agent',
instructions: `You are an expert customer support agent.
- Always be polite and helpful
- Solve problems efficiently
- Use tools when needed`,
// Model
model: anthropic('claude-3-5-sonnet-20241022'),
// Model configuration
modelConfig: {
temperature: 0.7,
maxTokens: 4000,
topP: 0.9,
frequencyPenalty: 0,
presencePenalty: 0,
},
// Memory
memory: {
maxTurns: 20,
summarizeAfter: 50,
summarizePrompt: 'Create a concise summary highlighting key points and decisions',
summarizeModel: openai('gpt-4o-mini'), // Cheaper model for summaries
},
// RAG (Agentic - LLM decides when to search)
rag: {
vectorStore: 'support-docs',
k: 5,
threshold: 0.7,
searchPrompt: 'Use for technical questions',
},
// Tools
tools: {
createTicket: ticketTool,
searchOrders: orderTool,
},
// Tool iteration limit
maxToolIterations: 10,
// Streaming
streaming: {
enabled: true,
},
// Debug mode
debug: true,
});
```
#### Supported Models
```typescript
import { openai, anthropic, bedrock } from '@runflow-ai/sdk';
// OpenAI
const gpt4 = openai('gpt-4o');
const gpt4mini = openai('gpt-4o-mini');
const gpt4turbo = openai('gpt-4-turbo');
const gpt35 = openai('gpt-3.5-turbo');
// Anthropic (Claude)
const claude35 = anthropic('claude-3-5-sonnet-20241022');
const claude3opus = anthropic('claude-3-opus-20240229');
const claude3sonnet = anthropic('claude-3-sonnet-20240229');
const claude3haiku = anthropic('claude-3-haiku-20240307');
// AWS Bedrock
const claudeBedrock = bedrock('anthropic.claude-3-sonnet-20240229-v1:0');
const titan = bedrock('amazon.titan-text-express-v1');
```
#### Agent Methods
```typescript
// Process a message
const result = await agent.process(input: AgentInput): Promise
// Stream a message
const stream = await agent.processStream(input: AgentInput): AsyncIterable
// Simple generation (without full agent context)
const response = await agent.generate(input: string | Message[]): Promise<{ text: string }>;
// Streaming generation
const stream = await agent.generateStream(prompt: string): AsyncIterable
// Generation with tools
const response = await agent.generateWithTools(input): Promise<{ text: string }>;
```
#### Multi-Agent Systems (Supervisor Pattern)
```typescript
const supervisor = new Agent({
name: 'Supervisor',
instructions: 'Route tasks to appropriate agents.',
model: openai('gpt-4o'),
agents: {
support: {
name: 'Support Agent',
instructions: 'Handle support requests.',
model: openai('gpt-4o-mini'),
},
sales: {
name: 'Sales Agent',
instructions: 'Handle sales inquiries.',
model: openai('gpt-4o-mini'),
},
},
});
// Supervisor automatically routes to the appropriate agent
await supervisor.process({
message: 'I want to buy your product',
sessionId: 'session_123',
});
```
#### Debug Mode
```typescript
const agent = new Agent({
name: 'Debug Agent',
instructions: 'Help users',
model: openai('gpt-4o'),
// Simple debug (all logs enabled)
debug: true,
// Or detailed debug configuration
debug: {
enabled: true,
logMessages: true, // Log messages
logLLMCalls: true, // Log LLM API calls
logToolCalls: true, // Log tool executions
logRAG: true, // Log RAG searches
logMemory: true, // Log memory operations
truncateAt: 1000, // Truncate logs at N characters
},
});
```
---
The Runflow Context is a global singleton that manages execution information and user identification. Identify a user once, and all agents and workflows automatically use that context.
#### Basic Usage
```typescript
import { Runflow, Agent, openai } from '@runflow-ai/sdk';
// Identify user by phone (WhatsApp)
Runflow.identify({
type: 'phone',
value: '+5511999999999',
});
// Agent automatically uses the context
const agent = new Agent({
name: 'WhatsApp Bot',
instructions: 'You are a helpful assistant.',
model: openai('gpt-4o'),
memory: {
maxTurns: 10,
},
});
// Memory is automatically bound to the phone number
await agent.process({
message: 'Hello!',
});
```
#### Smart Identification (Auto-Detection)
New in v2.1: The `identify()` function now auto-detects the entity type from the value format:
```typescript
import { identify } from '@runflow-ai/sdk/observability';
// Auto-detect email
identify('user@example.com');
// → type: 'email', value: 'user@example.com'
// Auto-detect phone (international)
identify('+5511999999999');
// → type: 'phone', value: '+5511999999999'
// Auto-detect phone (local with formatting)
identify('(11) 99999-9999');
// → type: 'phone', value: '(11) 99999-9999'
// Auto-detect UUID
identify('550e8400-e29b-41d4-a716-446655440000');
// → type: 'uuid'
// Auto-detect URL
identify('https://example.com');
// → type: 'url'
```
Supported patterns:
- Email: Standard RFC 5322 format
- Phone: E.164 format (with/without +, with/without formatting)
- UUID: Standard UUID v1-v5
- URL: With or without protocol
- Fallback: Generic id type for custom identifiers
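The detection rules above can be sketched with a few regular expressions. This is an illustrative guess at the behavior, not the SDK's actual implementation; the function name and patterns here are assumptions:

```typescript
// Hypothetical sketch of identify()'s auto-detection logic.
type EntityType = 'email' | 'phone' | 'uuid' | 'url' | 'id';

function detectEntityType(value: string): EntityType {
  // UUID v1-v5 (checked first: it is the most specific pattern)
  const uuidRe = /^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;
  // Simplified email check (a full RFC 5322 matcher is far more involved)
  const emailRe = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  // Digits with optional +, spaces, parentheses, dots, and dashes
  const phoneRe = /^\+?[\d\s().-]{8,}$/;
  // Host with a TLD, protocol optional
  const urlRe = /^(https?:\/\/)?[\w.-]+\.[a-z]{2,}(\/\S*)?$/i;

  if (uuidRe.test(value)) return 'uuid';
  if (emailRe.test(value)) return 'email';
  if (phoneRe.test(value)) return 'phone';
  if (urlRe.test(value)) return 'url';
  return 'id'; // generic fallback for custom identifiers
}
```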
#### Explicit Identification
For custom entity types or when auto-detection is not desired:
```typescript
import { identify } from '@runflow-ai/sdk/observability';
// HubSpot Contact
identify({
type: 'hubspot_contact',
value: 'contact_123',
userId: 'user@example.com',
});
// Order/Ticket
identify({
type: 'order',
value: 'ORDER-456',
userId: 'customer_789',
});
// Custom threadId override
identify({
type: 'document',
value: 'doc_456',
threadId: 'custom_thread_123',
});
```
#### Backward Compatibility
The old API still works:
```typescript
import { Runflow } from '@runflow-ai/sdk/core';
Runflow.identify({
type: 'email',
value: 'user@example.com',
});
```
#### State Management
```typescript
// Get complete state
const state = Runflow.getState();
// Get specific value
const threadId = Runflow.get('threadId');
const entityType = Runflow.get('entityType');
// Set custom state (advanced)
Runflow.setState({
entityType: 'custom',
entityValue: 'xyz',
threadId: 'my_custom_thread_123',
userId: 'user_123',
metadata: { custom: 'data' },
});
// Clear state (useful for testing)
Runflow.clearState();
```
---
The Memory system intelligently manages conversation history.
#### Memory Integrated in Agent
```typescript
const agent = new Agent({
name: 'Memory Agent',
instructions: 'You remember everything.',
model: openai('gpt-4o'),
memory: {
maxTurns: 20, // Limit turns
maxTokens: 4000, // Limit tokens
summarizeAfter: 50, // Summarize after N turns
summarizePrompt: 'Create a concise summary with key facts and action items',
summarizeModel: openai('gpt-4o-mini'), // Cheaper model for summaries
},
});
```
#### Standalone Memory Manager
```typescript
import { Memory } from '@runflow-ai/sdk';
// Using static methods (most common - 99% of cases)
await Memory.append({
role: 'user',
content: 'Hello!',
timestamp: new Date(),
});
await Memory.append({
role: 'assistant',
content: 'Hi! How can I help you?',
timestamp: new Date(),
});
// Get formatted history
const history = await Memory.getFormatted();
console.log(history);
// Get recent messages
const recent = await Memory.getRecent(5); // Last 5 turns
// Search in memory
const results = await Memory.search('order');
// Check if memory exists
const exists = await Memory.exists();
// Get full memory data
const data = await Memory.get();
// Clear memory
await Memory.clear();
```
#### Memory with Runflow Context
```typescript
import { Runflow, Memory } from '@runflow-ai/sdk';
// Identify user
Runflow.identify({
type: 'phone',
value: '+5511999999999',
});
// Memory automatically uses the context
await Memory.append({
role: 'user',
content: 'My order number is 12345',
timestamp: new Date(),
});
// Memory is automatically bound to the phone number
```
#### Custom Memory Key
```typescript
// Create memory with custom key
const memory = new Memory({
memoryKey: 'custom_key_123',
maxTurns: 10,
});
// Now use instance methods
await memory.append({ role: 'user', content: 'Hello', timestamp: new Date() });
const history = await memory.getFormatted();
```
#### Cross-Session Access
```typescript
// Access memory from different sessions (admin, analytics, etc)
const dataUser1 = await Memory.get('phone:+5511999999999');
const dataUser2 = await Memory.get('email:user@example.com');
// Search across multiple sessions
const results = await Promise.all([
Memory.search('bug', 'user:123'),
Memory.search('bug', 'user:456'),
Memory.search('bug', 'user:789'),
]);
// Get recent from specific session
const recent = await Memory.getRecent(5, 'session:abc123');
// Clear specific session
await Memory.clear('phone:+5511999999999');
```
#### Custom Summarization
```typescript
// Agent with custom summarization
const agent = new Agent({
name: 'Smart Agent',
model: openai('gpt-4o'),
memory: {
summarizeAfter: 30,
summarizePrompt: `Summarize in 3 bullet points:
- Main issue discussed
- Solution provided
- Next steps`,
summarizeModel: anthropic('claude-3-haiku'), // Fast & cheap
},
});
// Manual summarization with custom prompt
const summary = await Memory.summarize({
prompt: 'Extract only the key decisions from this conversation',
model: openai('gpt-4o-mini'),
});
```
---
Tools are functions that agents can call to perform specific actions. The SDK uses Zod for type-safe validation.
#### Create Basic Tool
```typescript
import { createTool } from '@runflow-ai/sdk';
import { z } from 'zod';
const weatherTool = createTool({
id: 'get-weather',
description: 'Get current weather for a location',
inputSchema: z.object({
location: z.string().describe('City name'),
units: z.enum(['celsius', 'fahrenheit']).optional(),
}),
outputSchema: z.object({
temperature: z.number(),
condition: z.string(),
}),
execute: async ({ context, runflow, projectId }) => {
// Implement logic
const weather = await fetchWeather(context.location);
return {
temperature: weather.temp,
condition: weather.condition,
};
},
});
```
#### Tool with Runflow API
```typescript
const searchDocsTool = createTool({
id: 'search-docs',
description: 'Search in documentation',
inputSchema: z.object({
query: z.string(),
}),
execute: async ({ context, runflow }) => {
// Use Runflow API for vector search
const results = await runflow.vectorSearch(context.query, {
vectorStore: 'docs',
k: 5,
});
return {
results: results.results.map(r => r.content),
};
},
});
```
#### Tool with Connector
```typescript
const createTicketTool = createTool({
id: 'create-ticket',
description: 'Create a support ticket',
inputSchema: z.object({
subject: z.string(),
description: z.string(),
priority: z.enum(['low', 'medium', 'high']),
}),
execute: async ({ context, runflow }) => {
// Use connector
const ticket = await runflow.connector(
'hubspot',
'create-ticket', // resource slug
{
subject: context.subject,
content: context.description,
priority: context.priority,
}
);
return { ticketId: ticket.id };
},
});
```
#### Tool Execution Context
The `execute` function receives:
- `context`: Validated input parameters (from `inputSchema`)
- `runflow`: Runflow API client for vector search, connectors, and memory
- `projectId`: Current project ID
---
The HTTP module provides pre-configured utilities for making HTTP requests in tools and agents. Built on top of axios, it comes with sensible defaults, automatic error handling, and full TypeScript support.
#### Features
- **Pre-configured axios instance** with 30s timeout
- **Automatic error handling** with enhanced error messages
- **Helper functions** for common HTTP methods (GET, POST, PUT, PATCH, DELETE)
- **Zero configuration** - works out of the box
- **Type-safe** - Full TypeScript support with exported types
- **Available in all agents** - No need to install additional dependencies
#### Quick Start
```typescript
import { createTool } from '@runflow-ai/sdk';
import { http, httpGet, httpPost } from '@runflow-ai/sdk/http';
import { z } from 'zod';
const weatherTool = createTool({
id: 'get-weather',
description: 'Get current weather for a city',
inputSchema: z.object({
city: z.string(),
}),
execute: async ({ context }) => {
try {
// Option 1: Using httpGet helper (simplest)
const data = await httpGet('https://api.openweathermap.org/data/2.5/weather', {
params: {
q: context.city,
appid: process.env.OPENWEATHER_API_KEY,
units: 'metric',
},
});
return {
city: data.name,
temperature: data.main.temp,
condition: data.weather[0].description,
};
} catch (error: any) {
return { error: `Failed to fetch weather: ${error.message}` };
}
},
});
```
#### Helper Functions
The SDK provides convenient helper functions that automatically extract data from responses:
```typescript
import { httpGet, httpPost, httpPut, httpPatch, httpDelete } from '@runflow-ai/sdk/http';
// GET request - returns only the data payload
const user = await httpGet('https://api.example.com/users/123');
console.log(user.name);
// POST request
const newUser = await httpPost('https://api.example.com/users', {
name: 'John Doe',
email: 'john@example.com',
});
// PUT request
const updated = await httpPut('https://api.example.com/users/123', {
name: 'Jane Doe',
});
// PATCH request
const patched = await httpPatch('https://api.example.com/users/123', {
email: 'newemail@example.com',
});
// DELETE request
await httpDelete('https://api.example.com/users/123');
```
#### Using the HTTP Instance
For more control, use the pre-configured http instance directly:
```typescript
import { http } from '@runflow-ai/sdk/http';
// GET with full response
const response = await http.get('https://api.example.com/data');
console.log(response.status);
console.log(response.headers);
console.log(response.data);
// POST with custom headers
const response = await http.post(
'https://api.example.com/resource',
{ data: 'value' },
{
headers: {
'Authorization': `Bearer ${process.env.API_TOKEN}`,
'Content-Type': 'application/json',
},
timeout: 5000,
}
);
// Multiple requests in parallel
const [users, posts, comments] = await Promise.all([
http.get('https://api.example.com/users'),
http.get('https://api.example.com/posts'),
http.get('https://api.example.com/comments'),
]);
```
#### Advanced: Direct Axios Usage
For complete control, use axios directly:
```typescript
import { axios } from '@runflow-ai/sdk/http';
// Create a custom instance
const customAPI = axios.create({
baseURL: 'https://api.example.com',
headers: {
'Authorization': `Bearer ${process.env.API_TOKEN}`,
},
timeout: 10000,
});
// Add interceptors
customAPI.interceptors.request.use((config) => {
console.log(`Request: ${config.method?.toUpperCase()} ${config.url}`);
return config;
});
// Use the custom instance
const response = await customAPI.get('/users');
```
#### Error Handling
All HTTP utilities provide enhanced error messages:
```typescript
import { httpGet } from '@runflow-ai/sdk/http';
try {
const data = await httpGet('https://api.example.com/data');
return { success: true, data };
} catch (error: any) {
// Error message includes HTTP status and details
console.error(error.message);
// "HTTP GET failed: HTTP 404: Not Found"
return { success: false, error: error.message };
}
```
#### TypeScript Types
All axios types are re-exported for convenience:
```typescript
import type {
AxiosInstance,
AxiosRequestConfig,
AxiosResponse,
AxiosError,
} from '@runflow-ai/sdk/http';
import { http } from '@runflow-ai/sdk/http';
async function fetchData(
url: string,
config?: AxiosRequestConfig
): Promise<AxiosResponse> {
const response = await http.get(url, config);
return response;
}
```
#### Complete Example: Weather Tool
```typescript
import { Agent, openai, createTool } from '@runflow-ai/sdk';
import { httpGet } from '@runflow-ai/sdk/http';
import { z } from 'zod';
const weatherTool = createTool({
id: 'get-weather',
description: 'Get current weather for any city',
inputSchema: z.object({
city: z.string().describe('City name (e.g., "Sรฃo Paulo", "New York")'),
}),
execute: async ({ context }) => {
try {
const apiKey = process.env.OPENWEATHER_API_KEY;
const data = await httpGet('https://api.openweathermap.org/data/2.5/weather', {
params: {
q: context.city,
appid: apiKey,
units: 'metric',
lang: 'pt_br',
},
timeout: 5000,
});
return {
city: data.name,
temperature: data.main.temp,
feelsLike: data.main.feels_like,
condition: data.weather[0].description,
humidity: data.main.humidity,
windSpeed: data.wind.speed,
};
} catch (error: any) {
if (error.message.includes('404')) {
return { error: `City "${context.city}" not found` };
}
throw new Error(`Weather API error: ${error.message}`);
}
},
});
const agent = new Agent({
name: 'Weather Assistant',
instructions: 'You help users check the weather. Use the weather tool when users ask about weather conditions.',
model: openai('gpt-4o'),
tools: {
weather: weatherTool,
},
});
// Use the agent
const result = await agent.process({
message: 'What is the weather like in Sรฃo Paulo?',
});
```
---
Connectors are dynamic integrations with external services defined in the Runflow backend. They support two modes of usage:
1. As Tools - For agent execution (LLM decides when to call)
2. Direct Invocation - For programmatic execution (you control when to call)
#### Key Features
- **Dynamic Schema Loading** - Schemas are fetched from the backend automatically
- **Transparent Mocking** - Enable mock mode for development and testing
- **Path Parameter Resolution** - Automatic extraction and URL building
- **Lazy Initialization** - Schemas loaded only when needed, cached globally
- **Flexible Authentication** - Supports API Key, Bearer Token, Basic Auth, OAuth2
- **Multiple Credentials** - Override credentials per execution (multi-tenant support)
- **Type-Safe** - Automatic JSON Schema → Zod → LLM Parameters conversion
---
#### Usage Mode 1: As Agent Tool
Use connectors as tools that the LLM can call automatically
> 💡 Resource Identifier: Use the resource slug (e.g., `get-customers`, `list-users`), which is auto-generated from the resource name.
> Slugs are stable, URL-safe identifiers that won't break if you rename the resource display name.
```typescript
import { createConnectorTool, Agent, openai } from '@runflow-ai/sdk';
// Basic connector tool (schema loaded from backend)
const getClienteTool = createConnectorTool({
connector: 'api-contabil', // Connector instance slug
resource: 'get-customers', // Resource slug
description: 'Get customer by ID from accounting API',
enableMock: true, // Optional: enables mock mode
});
// Use with Agent
const agent = new Agent({
name: 'Accounting Agent',
instructions: 'You help manage customers in the accounting system.',
model: openai('gpt-4o'),
tools: {
getCliente: getClienteTool,
listClientes: createConnectorTool({
connector: 'api-contabil',
resource: 'list-customers', // Resource slug
}),
},
});
// First execution automatically loads schemas from backend
const result = await agent.process({
message: 'Get customer with ID 123',
sessionId: 'session-123',
companyId: 'company-456',
});
```
---
#### Usage Mode 2: Direct Invocation
Invoke connectors directly without agent involvement:
> 💡 Identifiers:
> - Connector: Use the instance slug (e.g., `hubspot-prod`) - recommended over the display name
> - Resource: Use the resource slug (e.g., `create-contact`) - auto-generated from the resource name
```typescript
import { connector } from '@runflow-ai/sdk/connectors';
import type { ConnectorExecutionOptions } from '@runflow-ai/sdk';
// Direct connector call (using slugs - recommended)
const result = await connector(
'hubspot-prod', // connector instance slug
'create-contact', // resource slug
{ // data
email: 'john@example.com',
firstname: 'John',
lastname: 'Doe'
}
);
console.log('Contact created:', result);
```
With execution options:
```typescript
const options: ConnectorExecutionOptions = {
credentialId: 'cred-prod-123', // Override credential
timeout: 10000, // 10 seconds timeout
retries: 3, // Retry 3 times on failure
useMock: false, // Use real API
};
const result = await connector(
'api-contabil',
'get-customer', // Resource slug
{ id: 123 },
options
);
```
Multi-tenant example:
```typescript
// Different credentials per customer
async function createContactForCustomer(customerId: string, contactData: any) {
// Get customer's HubSpot credential
const credentialId = await getCustomerCredential(customerId, 'hubspot');
return await connector(
'hubspot',
'create-contact', // Resource slug
contactData,
{ credentialId }
);
}
// Usage
await createContactForCustomer('customer-1', { email: 'john@acme.com' });
await createContactForCustomer('customer-2', { email: 'jane@techcorp.com' });
```
Custom headers (override everything):
```typescript
// Custom headers have HIGHEST priority
const result = await connector(
'hubspot',
'create-contact', // Resource slug
{ email: 'test@example.com' },
{
headers: {
'Authorization': 'Bearer temp-test-token',
'X-Request-ID': generateId(),
}
}
);
```
Authentication Priority:
1. Custom headers (highest - overrides everything)
2. credentialId override (runtime override)
3. Instance credential (default from connector instance)
4. No authentication
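The priority chain above can be sketched as a small resolver. The helper and field names below are hypothetical, for illustration only; the SDK resolves credentials internally:

```typescript
// Hypothetical sketch of the documented authentication priority.
interface AuthInputs {
  customHeaders?: Record<string, string>;   // from execution options
  credentialOverrideToken?: string;         // resolved from a credentialId override
  instanceToken?: string;                   // default credential on the connector instance
}

function resolveAuthHeaders(inputs: AuthInputs): Record<string, string> {
  // 1. Custom headers win outright
  if (inputs.customHeaders?.Authorization) return { ...inputs.customHeaders };
  // 2. Then a runtime credentialId override
  if (inputs.credentialOverrideToken) {
    return { ...inputs.customHeaders, Authorization: `Bearer ${inputs.credentialOverrideToken}` };
  }
  // 3. Then the connector instance's default credential
  if (inputs.instanceToken) {
    return { ...inputs.customHeaders, Authorization: `Bearer ${inputs.instanceToken}` };
  }
  // 4. No authentication
  return { ...(inputs.customHeaders ?? {}) };
}
```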
---
#### Connector Tool Configuration
```typescript
createConnectorTool({
connector: string, // Connector instance slug (e.g., 'hubspot-prod', 'api-contabil')
resource: string, // Resource slug (e.g., 'get-contacts', 'list-customers', 'create-ticket')
description?: string, // Optional: Custom description (defaults to auto-generated)
enableMock?: boolean, // Optional: Enable mock mode (adds useMock parameter)
})
```
Important Notes:
- `connector`: Use the instance slug (e.g., `hubspot-prod`) instead of the display name
- `resource`: Use the resource slug (e.g., `get-users`, `create-order`) - auto-generated from the resource name
- Resource lookup priority: slug first → name fallback (backward compatibility)
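The slug-first lookup can be pictured as follows (field and function names are assumptions for illustration, not the SDK's actual internals):

```typescript
// Hypothetical sketch of slug-first resource lookup with name fallback.
interface ResourceDef { slug: string; name: string }

function resolveResource(resources: ResourceDef[], identifier: string): ResourceDef | undefined {
  return (
    resources.find((r) => r.slug === identifier) ??
    resources.find((r) => r.name === identifier) // backward compatibility
  );
}
```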
#### Multiple Connector Tools
```typescript
// Create multiple tools for the same connector
const tools = {
listClientes: createConnectorTool({
connector: 'api-contabil',
resource: 'list-customers', // Resource slug
enableMock: true,
}),
getCliente: createConnectorTool({
connector: 'api-contabil',
resource: 'get-customer', // Resource slug
enableMock: true,
}),
createCliente: createConnectorTool({
connector: 'api-contabil',
resource: 'create-customer', // Resource slug
enableMock: true,
}),
};
const agent = new Agent({
name: 'Customer Management Agent',
instructions: 'You help manage customers.',
model: openai('gpt-4o'),
tools,
});
// All schemas are loaded in parallel on first execution
const result = await agent.process({
message: 'Create a new customer named ACME Corp',
sessionId: 'session-123',
companyId: 'company-456',
});
```
#### Using loadConnector Helper
For connectors with many resources, use the loadConnector helper:
```typescript
import { loadConnector } from '@runflow-ai/sdk';
const contabil = loadConnector('api-contabil');
const agent = new Agent({
name: 'Accounting Agent',
instructions: 'You manage accounting data.',
model: openai('gpt-4o'),
tools: {
// Using resource slugs
listClientes: contabil.tool('list-customers'),
getCliente: contabil.tool('get-customer'),
createCliente: contabil.tool('create-customer'),
updateCliente: contabil.tool('update-customer'),
},
});
```
#### Path Parameters
Connectors automatically resolve path parameters from the resource URL:
```typescript
// Resource defined in backend with path: /clientes/{id}/pedidos/{pedidoId}
const getClientePedidoTool = createConnectorTool({
connector: 'api-contabil',
resource: 'get-customer-order', // Resource slug
description: 'Get specific order from a customer',
});
// Agent automatically extracts path params from context
const result = await agent.process({
message: 'Get order 456 from customer 123',
sessionId: 'session-123',
companyId: 'company-456',
});
// Backend automatically resolves: /clientes/123/pedidos/456
```
#### Mock Execution
Enable mock mode for development and testing:
```typescript
const tool = createConnectorTool({
connector: 'api-contabil',
resource: 'list-customers', // Resource slug
enableMock: true, // Adds useMock parameter
});
// Use mock mode in development
const result = await agent.process({
message: 'List customers (use mock data)',
sessionId: 'dev-session',
companyId: 'dev-company',
// Tool will automatically include useMock=true if mock data is configured
});
```
#### Complete Example: Both Modes
```typescript
import { Agent, openai, createConnectorTool, connector } from '@runflow-ai/sdk';
// 1. Create tool for agent use
const createContactTool = createConnectorTool({
connector: 'hubspot',
resource: 'create-contact', // Resource slug
description: 'Create contact in HubSpot',
});
const agent = new Agent({
name: 'CRM Agent',
instructions: 'You manage contacts in HubSpot.',
model: openai('gpt-4o'),
tools: { createContact: createContactTool },
});
// 2. Agent decides when to use tool (Mode 1)
await agent.process({
message: 'Create contact for Alice, alice@example.com',
sessionId: 'session-1',
});
// 3. Direct invocation (Mode 2 - you control)
await connector(
'hubspot',
'update-contact', // Resource slug
{
contactId: '123',
status: 'customer'
},
{ credentialId: 'cred-prod' }
);
```
---
#### How It Works
1. Tool Creation: createConnectorTool creates a tool with a temporary schema
2. Lazy Loading: On first agent execution, schemas are fetched from the backend in parallel
3. Schema Conversion: JSON Schema → Zod → LLM Parameters (automatic)
4. Caching: Schemas are cached globally to avoid repeated API calls
5. Execution: Tool/API executes with authentication, path resolution, and error handling
#### Automatic Setup
The Agent automatically initializes connector tools on first execution:
`typescript
const agent = new Agent({
name: 'My Agent',
model: openai('gpt-4o'),
tools: {
// Connector tools are automatically identified and initialized
tool1: createConnectorTool({ ... }),
tool2: createTool({ ... }), // Regular tool
tool3: createConnectorTool({ ... }),
},
});
// First process() call:
// 1. Identifies connector tools (marked with _isConnectorTool)
// 2. Loads schemas in parallel from backend
// 3. Updates tool parameters
// 4. Proceeds with normal execution
`
---
## Workflows
Workflows orchestrate multiple agents, functions, and connectors in sequence.
#### Basic Workflow
`typescript
import { createWorkflow, Agent, openai } from '@runflow-ai/sdk';
import { z } from 'zod';
// Define input/output schemas
const inputSchema = z.object({
customerEmail: z.string().email(),
issueDescription: z.string(),
});
const outputSchema = z.object({
ticketId: z.string(),
response: z.string(),
emailSent: z.boolean(),
});
// Create agents
const analyzerAgent = new Agent({
name: 'Issue Analyzer',
instructions: 'Analyze customer issues and categorize them.',
model: openai('gpt-4o'),
});
const responderAgent = new Agent({
name: 'Responder',
instructions: 'Write helpful responses to customers.',
model: openai('gpt-4o'),
});
// Create workflow
const workflow = createWorkflow({
id: 'support-workflow',
name: 'Support Ticket Workflow',
inputSchema,
outputSchema,
})
.agent('analyze', analyzerAgent, {
promptTemplate: 'Analyze this issue: {{input.issueDescription}}',
})
.connector('create-ticket', 'hubspot', 'tickets', 'create', {
subject: '{{analyze.text}}',
content: '{{input.issueDescription}}',
priority: 'medium',
})
.agent('respond', responderAgent, {
promptTemplate: 'Write a response for: {{input.issueDescription}}',
})
.connector('send-email', 'email', 'messages', 'send', {
to: '{{input.customerEmail}}',
subject: 'Your Support Request',
body: '{{respond.text}}',
})
.build();
// Execute workflow
const result = await workflow.execute({
customerEmail: 'customer@example.com',
issueDescription: 'My order has not arrived',
});
console.log(result);
`
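Placeholders like `{{input.issueDescription}}` and `{{respond.text}}` resolve dot-paths against the workflow input and prior step results. A minimal sketch of that resolution (hypothetical helper, not the engine's actual code):

```typescript
// Illustrative {{path}} resolution against a scope of input + step results.
function renderTemplate(template: string, scope: Record<string, unknown>): string {
  return template.replace(/\{\{([\w.]+)\}\}/g, (_, path: string) => {
    // Walk the dot-path, e.g. 'respond.text' -> scope.respond.text
    const value = path.split('.').reduce<any>((obj, key) => obj?.[key], scope);
    return value === undefined ? '' : String(value);
  });
}

// renderTemplate('Analyze this issue: {{input.issueDescription}}',
//                { input: { issueDescription: 'My order has not arrived' } })
```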
#### Workflow with Parallel Steps
`typescript
const workflow = createWorkflow({
id: 'parallel-workflow',
inputSchema: z.object({ query: z.string() }),
outputSchema: z.any(),
})
.parallel([
createAgentStep('agent1', agent1),
createAgentStep('agent2', agent2),
createAgentStep('agent3', agent3),
], {
waitForAll: true, // Wait for all to complete
})
.function('merge', async (input, context) => {
// Merge results
return {
combined: Object.values(context.stepResults.get('parallel')),
};
})
.build();
`
#### Workflow with Conditional Steps
`typescript
const workflow = createWorkflow({
id: 'conditional-workflow',
inputSchema: z.object({ priority: z.string() }),
outputSchema: z.any(),
})
.condition(
'check-priority',
(context) => context.input.priority === 'high',
// True path
[
createAgentStep('urgent-agent', urgentAgent),
createConnectorStep('notify-slack', 'slack', 'messages', 'send', {
channel: '#urgent',
message: 'High priority issue!',
}),
],
// False path
[
createAgentStep('normal-agent', normalAgent),
]
)
.build();
`
#### Workflow with Retry
`typescript
const workflow = createWorkflow({
id: 'retry-workflow',
inputSchema: z.object({ data: z.any() }),
outputSchema: z.any(),
})
.then({
id: 'api-call',
type: 'connector',
config: {
connector: 'external-api',
resource: 'data',
action: 'fetch',
parameters: {},
},
retryConfig: {
maxAttempts: 3,
backoff: 'exponential', // 'fixed', 'exponential', 'linear'
delay: 1000,
retryableErrors: ['timeout', 'network'],
},
})
.build();
`
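The delay before each retry attempt depends on the backoff strategy. Assuming the conventional formulas (the engine's exact math is not documented here), a sketch:

```typescript
// Illustrative retry delay per backoff strategy; the engine may differ.
type Backoff = 'fixed' | 'linear' | 'exponential';

function retryDelay(strategy: Backoff, baseDelayMs: number, attempt: number): number {
  // attempt is 1-based: the first retry is attempt 1
  switch (strategy) {
    case 'fixed': return baseDelayMs;                      // 1000, 1000, 1000
    case 'linear': return baseDelayMs * attempt;           // 1000, 2000, 3000
    case 'exponential': return baseDelayMs * 2 ** (attempt - 1); // 1000, 2000, 4000
  }
}
```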
#### Workflow Step Types
`typescript
import {
createAgentStep,
createFunctionStep,
createConnectorStep
} from '@runflow-ai/sdk';
// Agent step
const agentStep = createAgentStep('step-id', agent, {
promptTemplate: 'Process: {{input.data}}',
});
// Function step
const functionStep = createFunctionStep('step-id', async (input, context) => {
// Custom logic
return { result: 'processed' };
});
// Connector step
const connectorStep = createConnectorStep(
'step-id',
'hubspot',
'contacts',
'create',
{ email: '{{input.email}}' }
);
`
---
## Prompts
The Prompts module manages prompt templates with support for global and tenant-specific prompts.
#### Using loadPrompt() with Agent (Recommended)
The simplest way to use prompts from the portal is with loadPrompt(). It works just like openai() - no await needed!
`typescript
import { Agent, openai, loadPrompt } from '@runflow-ai/sdk';
// Load prompt directly in agent config - no await!
const agent = new Agent({
name: 'Support Agent',
instructions: loadPrompt('customer-support', {
product: 'CRM Pro',
tone: 'professional',
greeting: 'Hello'
}),
model: openai('gpt-4o'),
});
// The prompt is resolved automatically when processing
await agent.process({ message: 'I need help!' });
`
How it works:
1. loadPrompt() creates a lazy reference (no API call yet)
2. When agent.process() runs, the prompt is fetched from the portal
3. Variables are rendered automatically
4. Result is cached for subsequent calls
`typescript
// Without variables
instructions: loadPrompt('simple-prompt')
// With variables (uses {{variable}} syntax in prompt content)
instructions: loadPrompt('customer-support', {
product: 'SaaS Platform',
tone: 'friendly',
language: 'Portuguese'
})
`
#### Standalone Prompts Manager
For more control, use the Prompts class directly:
`typescript
import { Prompts } from '@runflow-ai/sdk';
const prompts = new Prompts();
// Get prompt (global or tenant-specific)
const prompt = await prompts.get('sistema');
console.log(prompt.content);
console.log('Is global?', prompt.isGlobal);
// List all available prompts
const allPrompts = await prompts.list({ limit: 50 });
allPrompts.forEach(p => {
console.log(`${p.name} ${p.isGlobal ? '(global)' : '(tenant)'}`);
});
// Create tenant-specific prompt
const custom = await prompts.create(
'my-prompt',
'You are a specialist in {{topic}}.',
{ variables: ['topic'] }
);
// Update tenant prompt
await prompts.update('my-prompt', {
content: 'You are a SENIOR specialist in {{topic}}.'
});
// Delete tenant prompt
await prompts.delete('my-prompt');
// Render template with variables
const rendered = prompts.render(
'Hello {{name}}, welcome to {{company}}!',
{ name: 'John', company: 'Runflow' }
);
// Get and render in one call
const text = await prompts.getAndRender('my-prompt', { topic: 'AI' });
`
Security Rules:
- ✅ Can read global prompts (provided by Runflow)
- ✅ Can create/update/delete own tenant prompts
- ❌ Cannot modify global prompts
- ❌ Cannot access other tenants' prompts
---
## Knowledge (RAG)
The Knowledge module (also called RAG) manages semantic search in vector knowledge bases.
#### Standalone Knowledge Manager
`typescript
import { Knowledge } from '@runflow-ai/sdk';
const knowledge = new Knowledge({
vectorStore: 'support-docs',
k: 5,
threshold: 0.7,
});
// Basic search
const results = await knowledge.search('How to reset password?');
results.forEach(result => {
console.log(result.content);
console.log('Score:', result.score);
});
// Get formatted context for LLM
const context = await knowledge.getContext('password reset', { k: 3 });
console.log(context);
`
#### Hybrid Search (Semantic + Keyword)
`typescript
const results = await knowledge.hybridSearch({
query: 'password reset',
keywords: ['password', 'reset', 'forgot'],
k: 5,
});
`
#### Multi-Query Search
`typescript
const results = await knowledge.multiQuery(
'How to reset password?',
{
variants: [
'password recovery',
'forgot password',
'reset credentials',
],
k: 5,
}
);
`
#### Agentic RAG in Agent
When RAG is configured in an agent, the SDK automatically creates a searchKnowledge tool that the LLM can decide when to use. This is more efficient than always searching, as the LLM only searches when necessary.
`typescript
const agent = new Agent({
name: 'Support Agent',
instructions: 'You are a helpful support agent.',
model: openai('gpt-4o'),
rag: {
vectorStore: 'support-docs',
k: 5,
threshold: 0.7,
// Custom search prompt - guides when to search
searchPrompt: `Use searchKnowledge tool when user asks about:
- Technical problems
- Process questions
- Specific information
Don't use for greetings or casual chat.`,
toolDescription: 'Search in support documentation for solutions',
},
});
// Agent automatically has 'searchKnowledge' tool
// LLM decides when to search (not always - more efficient!)
const result = await agent.process({
message: 'How do I reset my password?',
});
`
#### Multiple Vector Stores
`typescript
const agent = new Agent({
name: 'Advanced Support Agent',
instructions: 'Help users with multiple knowledge bases.',
model: openai('gpt-4o'),
rag: {
vectorStores: [
{
id: 'support-docs',
name: 'Support Documentation',
description: 'General support articles',
threshold: 0.7,
k: 5,
searchPrompt: 'Use search_support-docs when user has technical problems or questions',
},
{
id: 'api-docs',
name: 'API Documentation',
description: 'Technical API reference',
threshold: 0.8,
k: 3,
searchPrompt: 'Use search_api-docs when user asks about API endpoints or integration',
},
],
},
});
`
#### Managing Documents in Knowledge Base
Add text documents to your knowledge base:
`typescript
import { Knowledge } from '@runflow-ai/sdk';
const knowledge = new Knowledge({
vectorStore: 'support-docs',
});
// Add a text document
const result = await knowledge.addDocument(
'How to reset password: Go to settings > security > reset password',
{
title: 'Password Reset Guide',
category: 'authentication',
version: '1.0'
}
);
console.log('Document added:', result.documentId);
`
Upload files (PDF, DOCX, TXT, etc.):
`typescript
import * as fs from 'fs';
// Node.js - Upload from file system
const fileBuffer = fs.readFileSync('./manual.pdf');
const result = await knowledge.addFile(
fileBuffer,
'manual.pdf',
{
title: 'User Manual',
mimeType: 'application/pdf',
metadata: {
department: 'Support',
version: '2.0'
}
}
);
console.log('File uploaded:', result.documentId);
// Browser - Upload from file input
const fileInput = document.querySelector('input[type="file"]');
const file = fileInput.files[0];
const result = await knowledge.addFile(
file,
file.name,
{
title: 'User Upload',
metadata: { source: 'web-portal' }
}
);
`
List and delete documents:
`typescript
// List all documents
const documents = await knowledge.listDocuments({ limit: 50 });
documents.forEach(doc => {
console.log(`ID: ${doc.id}`);
console.log(`Content: ${doc.content.substring(0, 100)}...`);
console.log(`Created: ${doc.createdAt}`);
});
// Delete a document
await knowledge.deleteDocument('document-id-here');
`
#### RAG Interceptor & Rerank
Advanced features for customizing RAG results before they reach the LLM.
Interceptor - Filter & Transform Results:
`typescript
const agent = new Agent({
name: 'Smart Agent',
model: openai('gpt-4o'),
rag: {
vectorStore: 'docs',
k: 10,
// Interceptor: Customize results before LLM
onResultsFound: async (results, query) => {
// 1. Filter sensitive data
const filtered = results.filter(r => !r.metadata?.internal);
// 2. Enrich with external data
const enriched = await Promise.all(
filtered.map(async r => ({
...r,
content: `${r.content}\n\nSource: ${r.metadata?.url}`,
}))
);
return enriched;
},
},
});
`
Rerank - Improve Relevance:
`typescript
// In Agent
const agent = new Agent({
model: openai('gpt-4o'),
rag: {
vectorStore: 'docs',
k: 10,
// Rerank strategy
rerank: {
enabled: true,
strategy: 'score-boost',
boostKeywords: ['official', 'tutorial', 'guide'],
},
},
});
// In Knowledge standalone
const knowledge = new Knowledge({ vectorStore: 'docs' });
const results = await knowledge.search('query');
// Rerank with custom logic
const reranked = await knowledge.rerank(results, 'query', {
enabled: true,
strategy: 'custom',
customScore: (result, query) => {
let score = result.score;
// Boost recent docs
const daysSince = daysSinceUpdate(result.metadata?.updatedAt);
if (daysSince < 30) score *= 1.5;
// Boost exact matches
if (result.content.includes(query)) score *= 1.3;
return score;
},
});
`
Rerank Strategies:
- reciprocal-rank-fusion - Standard RRF algorithm
- score-boost - Boost results containing keywords
- metadata-weight - Weight by metadata field value
- custom - Custom scoring function
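For reference, the reciprocal-rank-fusion strategy conventionally scores each document by summing 1 / (k + rank) across the ranked lists it appears in, with k = 60 a common default. A sketch of the standard algorithm; the SDK's exact parameters may differ:

```typescript
// Standard reciprocal rank fusion over multiple rankings of document ids.
function reciprocalRankFusion(rankings: string[][], k = 60): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, index) => {
      // rank is 1-based, so the top result contributes 1 / (k + 1)
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + index + 1));
    });
  }
  return scores;
}
```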
Combined - Rerank + Interceptor:
`typescript
rag: {
vectorStore: 'docs',
// 1. Rerank first (improve relevance)
rerank: {
enabled: true,
strategy: 'score-boost',
boostKeywords: ['tutorial', 'guide'],
},
// 2. Interceptor after (enrich)
onResultsFound: async (results) => {
return results.map(r => ({
...r,
content: `${r.content}\n\nCategory: ${r.metadata?.category}`,
}));
},
}
`
---
## LLM Standalone
The LLM module allows you to use language models directly without creating agents.
#### Basic Usage
`typescript
import { LLM } from '@runflow-ai/sdk';
// Create LLM
const llm = LLM.openai('gpt-4o', {
temperature: 0.7,
maxTokens: 2000,
});
// Generate response
const response = await llm.generate('What is the capital of Brazil?');
console.log(response.text);
console.log('Tokens:', response.usage);
`
#### With Messages
`typescript
const response = await llm.generate([
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Tell me a joke.' },
]);
`
#### With System Prompt
`typescript
const response = await llm.generate(
'What is 2+2?',
{
system: 'You are a math teacher.',
temperature: 0.1,
}
);
`
#### Streaming
`typescript
const stream = llm.generateStream('Tell me a story');
for await (const chunk of stream) {
if (!chunk.done) {
process.stdout.write(chunk.text);
}
}
`
#### Factory Methods
`typescript
import { LLM } from '@runflow-ai/sdk';
// OpenAI
const gpt4 = LLM.openai('gpt-4o', { temperature: 0.7 });
// Anthropic (Claude)
const claude = LLM.anthropic('claude-3-5-sonnet-20241022', {
temperature: 0.9,
maxTokens: 4000,
});
// Bedrock
const bedrock = LLM.bedrock('anthropic.claude-3-sonnet-20240229-v1:0', {
temperature: 0.8,
});
`
---
## Media Processing
Process audio, images, and other media types automatically.
#### Audio Transcription
Transcribe audio files to text using multiple providers:
`typescript
import { transcribe, Media } from '@runflow-ai/sdk';
// Standalone function (default: OpenAI Whisper)
const result = await transcribe({
audioUrl: 'https://example.com/audio.ogg',
language: 'pt',
});
console.log(result.text); // "Olรก, como vai?"
// Using specific provider
const result2 = await transcribe({
audioUrl: 'https://example.com/audio.ogg',
provider: 'deepgram', // openai | deepgram | assemblyai | google
language: 'pt',
});
// Or via Media class
const result3 = await Media.transcribe({
audioUrl: 'https://example.com/audio.ogg',
provider: 'openai',
});
`
#### Supported Providers
| Provider | Status | Description |
|----------|--------|-------------|
| openai | ✅ Available | OpenAI Whisper (default) |
| deepgram | 🔜 Coming | Deepgram |
| assemblyai | 🔜 Coming | AssemblyAI |
| google | 🔜 Coming | Google Speech-to-Text |
#### Agent with Auto Media Processing
Configure agents to automatically process media files:
`typescript
import { Agent, openai } from '@runflow-ai/sdk';
const agent = new Agent({
name: 'WhatsApp Assistant',
instructions: 'You are a helpful assistant.',
model: openai('gpt-4o'),
// Auto media processing
media: {
transcribeAudio: true, // Transcribe audio files automatically
processImages: true, // Process images as multimodal (GPT-4o Vision)
audioProvider: 'openai', // Transcription provider
audioLanguage: 'pt', // Default language for transcription
},
});
// Audio files are automatically transcribed before processing
const result = await agent.process({
message: '', // Can be empty when file has audio
file: {
url: 'https://zenvia.com/storage/audio.ogg',
contentType: 'audio/ogg',
caption: 'Voice message', // Optional
},
});
// Images are automatically processed as multimodal
const result2 = await agent.process({
message: 'What is in this image?',
file: {
url: 'https://example.com/image.jpg',
contentType: 'image/jpeg',
},
});
`
#### Media Config Options
`typescript
interface MediaConfig {
transcribeAudio?: boolean; // Auto-transcribe audio (default: false)
processImages?: boolean; // Auto-process images (default: false)
audioLanguage?: string; // Language code (pt, en, es, etc.)
audioProvider?: TranscribeProvider; // openai | deepgram | assemblyai | google
audioModel?: string; // Provider-specific model
}
`
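The config drives a simple dispatch on the file's content type: audio is routed to transcription, images to the multimodal path. An illustrative sketch of that decision, not the SDK's internals:

```typescript
// Hypothetical dispatch an agent could apply per MediaConfig.
type MediaAction = 'transcribe' | 'vision' | 'ignore';

function mediaAction(
  contentType: string,
  config: { transcribeAudio?: boolean; processImages?: boolean }
): MediaAction {
  if (contentType.startsWith('audio/') && config.transcribeAudio) return 'transcribe';
  if (contentType.startsWith('image/') && config.processImages) return 'vision';
  return 'ignore'; // unhandled types pass through untouched
}
```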
---
## Observability
The Observability system automatically collects execution traces for analysis and debugging.
#### Automatic Tracing (Agent)
`typescript
// Traces are collected automatically
const agent = new Agent({
name: 'Support Agent',
instructions: 'Help customers.',
model: openai('gpt-4o'),
});
// Each execution automatically generates traces
await agent.process({
message: 'Help me',
companyId: 'company_123', // Optional
sessionId: 'session_456', // Optional
executionId: 'exec_123', // Optional
threadId: 'thread_789', // Optional
});
`
#### Manual Tracing
`typescript
import { createTraceCollector } from '@runflow-ai/sdk';
// Create collector
const collector = createTraceCollector(apiClient, 'project_123', {
batchSize: 10,
flushInterval: 5000,
maxRetries: 3,
});
// Start span
const span = collector.startSpan('custom_operation', {
agentName: 'Custom Agent',
model: 'gpt-4o',
});
span.setInput({ data: 'input' });
try {
// Execute operation
const result = await doSomething();
span.setOutput(result);
span.setCosts({
tokens: { input: 100, output: 50, total: 150 },
costs: {
inputCost: 0.003,
outputCost: 0.002,
totalCost: 0.005,
currency: 'USD'
},
});
} catch (error) {
span.setError(error);
}
span.finish();
// Force flush
await collector.flush();
`
#### Decorator for Auto-Tracing
`typescript
import { traced } from '@runflow-ai/sdk';
class MyService {
private traceCollector: TraceCollector;
@traced('my_operation', { agentName: 'My Agent' })
async myMethod(input: any) {
// Automatically traced
return processData(input);
}
}
`
#### Local Traces (Development)
`bash
# .env
RUNFLOW_LOCAL_TRACES=true
`
Traces will be saved to .runflow/traces.json in a structured format organized by executionId for analysis.
#### Trace Types
The SDK supports various trace types:
- agent_execution - Full agent processing
- workflow_execution - Workflow processing
- workflow_step - Individual workflow step
- tool_call - Tool execution
- llm_call - LLM API call
- vector_search - Vector search operation
- memory_operation - Memory access
- connector_call - Connector execution
- streaming_session - Streaming response
- execution_summary - Custom execution (new)
- custom_event - Custom log event (new)
- error_event - Error logging (new)
#### Custom Executions (Non-Agent Flows)
For scenarios without agent.process() (document analysis, batch processing, etc.):
`typescript
import { identify, startExecution, log } from '@runflow-ai/sdk/observability';
export async function analyzeDocument(docId: string) {
// 1. Identify context
identify({ type: 'document', value: docId });
// 2. Start custom execution
const exec = startExecution({
name: 'document-analysis',
input: { documentId: docId }
});
try {
// 3. Process with LLM calls
const llm = LLM.openai('gpt-4o');
const text = await llm.chat("Extract text from document...");
exec.log('text_extracted', { length: text.length });
const category = await llm.chat(`Classify this: ${text}`);
exec.log('document_classified', { category });
const summary = await llm.chat(`Summarize: ${text}`);
// 4. Finish with custom output
await exec.end({
output: {
summary,
category,
documentId: docId
}
});
return { summary, category };
} catch (error) {
exec.setError(error);
await exec.end();
throw error;
}
}
`
In the Portal:
`
Thread: document_xxx_doc_456
└─ Execution: "document-analysis"
   ├─ llm_call: Extract text
   ├─ custom_event: text_extracted
   ├─ llm_call: Classify
   ├─ custom_event: document_classified
   └─ llm_call: Summarize
`
#### Custom Logging
Log custom events within any execution:
`typescript
import { log, logEvent, logError } from '@runflow-ai/sdk/observability';
// Simple log
log('cache_hit', { key: 'user_123' });
// Structured log
logEvent('validation', {
input: { orderId: '123', amount: 100 },
output: { valid: true, score: 0.95 },
metadata: { rule: 'fraud_detection' }
});
// Error log
try {
await riskyOperation();
} catch (error) {
logError('operation_failed', error);
throw error;
}
`
Logs are automatically associated with the current execution and flushed with other traces.
#### Exception Safety
The SDK ensures traces are not lost even on crashes:
- Exit handlers: Auto-flush on process.exit(), SIGTERM, SIGINT
- Exception handlers: Flush on uncaughtException and unhandledRejection
- Auto-cleanup: Custom executions auto-flush after 60s if not manually ended
- Worker safety: Execution engine waits 100ms for pending flushes
Coverage: ~95% trace recovery even on crashes.
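The key trick behind safe exit handlers is making the flush idempotent, so overlapping signals trigger a single flush. A hypothetical sketch of that pattern, not the SDK's handler code:

```typescript
// Wrap a flush so concurrent triggers (SIGTERM + beforeExit, etc.)
// share one underlying flush promise.
function makeFlushOnce(flush: () => Promise<void>): () => Promise<void> {
  let pending: Promise<void> | undefined;
  return () => (pending ??= flush());
}

// A real setup might register it like:
// const flushOnce = makeFlushOnce(() => collector.flush());
// process.on('SIGTERM', flushOnce);
// process.on('beforeExit', flushOnce);
```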
#### Verbose Tracing Mode
New in v2.1: Control how much data is saved in traces - from minimal metadata to complete prompts and responses.
Modes:
- minimal: Only essential metadata (production, minimal storage)
- standard: Balanced metadata + sizes (default)
- full: Complete data including prompts and responses (debugging)
Simple API (string preset):
`typescript
const agent = new Agent({
name: 'My Agent',
model: openai('gpt-4o'),
observability: 'full' // 'minimal', 'standard', or 'full'
});
`
Granular Control (object config):
`typescript
const agent = new Agent({
name: 'My Agent',
model: openai('gpt-4o'),
observability: {
mode: 'standard', // Base mode
verboseLLM: true, // Override: save complete prompts
verboseMemory: false, // Override: keep memory minimal
verboseTools: true, // Override: save tool data (default)
maxInputLength: 5000, // Truncate large inputs
maxOutputLength: 5000, // Truncate large outputs
}
});
`
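maxInputLength and maxOutputLength truncate oversized payloads before they are stored. A sketch of what such truncation might look like; the marker format here is an assumption:

```typescript
// Illustrative truncation of trace payloads to a configured limit.
function truncateForTrace(text: string, maxLength: number): string {
  if (text.length <= maxLength) return text;
  // Keep the head of the payload and note how much was dropped.
  return text.slice(0, maxLength) + `… [truncated ${text.length - maxLength} chars]`;
}
```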
Environment Variable:
`bash
# .env
RUNFLOW_VERBOSE_TRACING=true # Auto-sets mode to 'full'
`
`typescript
// Auto-detects from environment
const agent = new Agent({
name: 'My Agent',
model: openai('gpt-4o'),
// observability: 'full' auto-applied if env var set
});
`
What Each Mode Saves:
| Trace Type | Minimal | Standard | Full |
|------------|---------|----------|------|
| LLM Call | messagesCount, config | messagesCount, config | Complete messages + responses |
| Memory Load | messagesCount | messagesCount | First 10 messages (truncated) |
| Memory Save | messagesSaved | messagesSaved | User + assistant messages |
| Tool Call | Always full (with truncation) | Always full | Always full |
| Agent Execution | Always full | Always full | Always full |
Storage Impact:
- Minimal: ~100 bytes/trace
- Standard: ~500 bytes/trace
- Full: ~5KB/trace (with truncation)
Recommended Usage:
`typescript
// Production: minimal storage
const prodAgent = new Agent({
observability: 'minimal'
});
// Staging: balanced
const stagingAgent = new Agent({
observability: 'standard'
});
// Development: complete debugging
const devAgent = new Agent({
observability: 'full'
});
// Or detect automatically
const agent = new Agent({
observability: process.env.NODE_ENV === 'production' ? 'minimal' : 'full'
});
`
#### Trace Interceptor (onTrace)
Intercept and modify traces before they are sent, useful for:
- Sending to external tools (DataDog, Sentry, CloudWatch)
- Adding custom metadata
- Filtering specific traces
- Audit logging
Example: Send to DataDog
`typescript
const agent = new Agent({
observability: {
mode: 'full',
onTrace: (trace) => {
// Send to DataDog
datadogTracer.trace({
name: trace.operation,
resource: trace.type,
duration: trace.duration,
meta: trace.metadata
});
// Return trace unchanged to continue normal flow
return trace;
}
}
});
`
Example: Add Custom Metadata
`typescript
const agent = new Agent({
observability: {
onTrace: (trace) => {
// Enrich with custom data
trace.metadata.environment = 'production';
trace.metadata.version = '1.0.0';
trace.metadata.region = 'us-east-1';
return trace;
}
}
});
`
Example: Filter LLM Calls
`typescript
const agent = new Agent({
observability: {
onTrace: (trace) => {
// Only send LLM calls to external tracker
if (trace.type === 'llm_call') {
externalTracker.send(trace);
}
return trace;
}
}
});
`
Example: Cancel Sensitive Traces
`typescript
const agent = new Agent({
observability: {
onTrace: (trace) => {
// Cancel traces with sensitive data
if (trace.metadata?.containsSensitiveData) {
return null; // Cancel trace (won't be sent)
}
return trace;
}
}
});
`
Example: Error Tracking with Sentry
`typescript
const agent = new Agent({
observability: {
onTrace: (trace) => {
// Send errors to Sentry
if (trace.type === 'error_event' || trace.status === 'error') {
Sentry.captureException(new Error(trace.error || 'Unknown error'), {
extra: {
traceId: trace.traceId,
executionId: trace.executionId,
operation: trace.operation,
metadata: trace.metadata
}
});
}
return trace;
}
}
});
`
Callback Return Values:
- TraceData: Modified trace (will be sent with changes)
- null: Cancel trace (won't be sent or saved)
- void/undefined: Continue with original trace
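The dispatch a trace pipeline would apply to these return values can be sketched as follows (simplified TraceData shape and hypothetical helper, not the SDK's pipeline code):

```typescript
// Simplified stand-in for the SDK's trace shape.
interface TraceData { type: string; metadata: Record<string, unknown>; }
type OnTraceResult = TraceData | null | undefined;

function applyOnTrace(
  trace: TraceData,
  onTrace?: (t: TraceData) => OnTraceResult
): TraceData | null {
  if (!onTrace) return trace;
  const result = onTrace(trace);
  if (result === null) return null; // cancelled: not sent or saved
  return result ?? trace;           // modified trace, or original on undefined/void
}
```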
---
#### Multi-Modal (Vision)
`typescript
const agent = new Agent({
name: 'Vision Agent',
instructions: 'You can analyze images.',
model: openai('gpt-4o'),
});
await agent.process({
message: 'What is in this image?',
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'What is in this image?' },
{
type: 'image_url',
image_url: { url: 'https://example.com/image.jpg' },
},
],
},
],
});
`
#### Streaming with Memory
`typescript
const agent = new Agent({
name: 'Streaming Agent',
instructions: 'You are helpful.',
model: openai('gpt-4o'),
memory: {
maxTurns: 10,
},
streaming: {
enabled: true,
},
});
const stream = await agent.processStream({
message: 'Tell me a story',
sessionId: 'session_123',
});
for await (const chunk of stream) {
if (!chunk.done) {
process.stdout.write(chunk.text);
}
}
`
#### Custom Memory Provider
`typescript
import { Memory, MemoryProvider, MemoryData, MemoryMessage } from '@runflow-ai/sdk';
class RedisMemoryProvider implements MemoryProvider {
async get(key: string): Promise<MemoryData> {
const data = await redis.get(key);
return JSON.parse(data);
}
async set(key: string, data: MemoryData): Promise<void> {
await redis.set(key, JSON.stringify(data));
}
async append(key: string, message: MemoryMessage): Promise<void> {
const data = await this.get(key);
data.messages.push(message);
await this.set(key, data);
}
async clear(key: string): Promise<void> {
await redis.del(key);
}
}
}
// Use custom provider
const memory = new Memory({
provider: new RedisMemoryProvider(),
maxTurns: 10,
});
`
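For unit tests, the same provider interface can be implemented fully in memory. A self-contained sketch; the local MemoryData and MemoryMessage shapes are simplified stand-ins for the SDK types:

```typescript
// Simplified local stand-ins for the SDK's memory types.
interface MemoryMessage { role: string; content: string; }
interface MemoryData { messages: MemoryMessage[]; }

// Map-backed provider with the same method shapes as the Redis example.
class InMemoryProvider {
  private store = new Map<string, MemoryData>();
  async get(key: string): Promise<MemoryData> {
    return this.store.get(key) ?? { messages: [] };
  }
  async set(key: string, data: MemoryData): Promise<void> {
    this.store.set(key, data);
  }
  async append(key: string, message: MemoryMessage): Promise<void> {
    const data = await this.get(key);
    data.messages.push(message);
    await this.set(key, data);
  }
  async clear(key: string): Promise<void> {
    this.store.delete(key);
  }
}
```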
`typescript
const workflow = createWorkflow({
id: 'e-commerce-workflow',
inputSchema: z.object({
customerId: z.string(),
query: z.string(),
}),
outputSchema: z.any(),
})
// 1. Analyze intent
.agent('analyzer', analyzerAgent, {
promptTemplate: 'Analyze customer query: {{input.que