# llm-nodes

Lightweight, composable LLM nodes for TypeScript, built on native provider SDKs (Anthropic, OpenAI, Google, AWS Bedrock) with a simple, intuitive API.

## Installation

```bash
npm install llm-nodes
```
This library uses dotenv to load API keys from your environment. Create a `.env` file in the root of your project with your API keys:

```bash
# API keys for different providers
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
GROK_API_KEY=your_grok_api_key_here
```
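Because the library calls dotenv itself, no explicit setup is required. If you prefer to control when the file is loaded, the standard dotenv side-effect import works:

```typescript
// Optional: load .env explicitly before constructing any nodes
import "dotenv/config";
```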
## Features
- Simplified Node Pattern: Combines prompt templates, LLM configuration, and response parsing into a cohesive unit
- Type-Safe: Full TypeScript support with generics for input and output types
- Composable: Easily connect nodes using functional composition
- Provider Agnostic: Support for multiple LLM providers (OpenAI, Anthropic, AWS Bedrock, Google)
- Research Mode Support: Native support for advanced reasoning models (OpenAI o1/o3, Anthropic Claude 3.7+)
- Specialized Nodes: Purpose-built nodes for common tasks like classification, extraction, and RAG
- Flexible Pipelines: Advanced pipeline patterns for complex workflows
- Native SDKs: Built directly on provider SDKs (Anthropic, OpenAI, Google, AWS Bedrock) for optimal performance
- Lightweight: Minimal API with sensible defaults for rapid development
## Quick Start

```typescript
import { LLMNode, jsonParser } from "llm-nodes";

// Create a simple node for sentiment analysis
const sentimentAnalyzer = new LLMNode<
    { text: string },
    { sentiment: string; score: number }
>({
    promptTemplate: `Analyze the sentiment of the following text and respond with JSON in the format {"sentiment": "positive" | "neutral" | "negative", "score": <number from 0 to 1>}.

Text: {{text}}`,
    llmConfig: {
        provider: "openai",
        model: "gpt-3.5-turbo",
        temperature: 0.3,
    },
    parser: jsonParser<{ sentiment: string; score: number }>(),
});

// Use the node
async function analyzeSentiment(text: string) {
    const result = await sentimentAnalyzer.execute({ text });
    console.log(result); // { sentiment: "positive", score: 0.8 }
    return result;
}

analyzeSentiment("I'm having a fantastic day today!");
```

## Node Types
### LLMNode

The foundation of the library, encapsulating prompt templates, LLM configuration, and response parsing:

```typescript
const summarizer = new LLMNode<{ text: string }, string>({
    promptTemplate: "Summarize the following text: {{text}}",
    llmConfig: {
        provider: "anthropic",
        model: "claude-3-opus-20240229",
    },
    parser: (text) => text,
});
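
// Use it (execute resolves to the parser's return type, here a string)
const summary = await summarizer.execute({ text: "..." });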
```

### TextNode
A simplified node for text generation with the text parser built-in:
```typescript
const textGenerator = new TextNode({
    promptTemplate: "Write a short story about {{topic}} in {{style}} style.",
    llmConfig: {
        provider: "openai",
        model: "gpt-4",
    },
});

// Use it
const story = await textGenerator.execute({
    topic: "a robot learning to paint",
    style: "magical realism",
});

// Add instructions without creating a new node
const detailedGenerator = textGenerator.withAdditionalPrompt(
    "Include vivid sensory details and a surprising twist at the end."
);
```

### Specialized Nodes
The library includes several specialized node types for common tasks:
- StructuredOutputNode: Enforces output schema with Zod validation (see the sketch after the examples below)
- ClassificationNode: Classifies inputs into predefined categories
- ExtractionNode: Extracts structured fields from text
- ChainNode: Implements multi-step reasoning chains
- RAGNode (Incomplete): Retrieval-augmented generation with document context
```typescript
// Example: Classification node
const categoryClassifier = new ClassificationNode({
    categories: ["business", "technology", "health", "entertainment"] as const,
    llmConfig: { provider: "openai", model: "gpt-3.5-turbo" },
    includeExplanation: true,
});

// Example: Extraction node
const contactExtractor = new ExtractionNode({
    fields: [
        { name: "name", description: "Full name of the person" },
        { name: "email", description: "Email address", format: "email" },
        { name: "phone", description: "Phone number", required: false },
    ],
    promptTemplate: "Extract contact information from: {{text}}",
    llmConfig: { provider: "anthropic", model: "claude-3-sonnet-20240229" },
});
```
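The structured-output variant pairs naturally with Zod. A minimal sketch, assuming the Zod schema is passed via a `schema` option (the option name and the schema fields here are illustrative, not confirmed API):

```typescript
import { z } from "zod";
import { StructuredOutputNode } from "llm-nodes";

// Hypothetical example: validate the model's JSON against a Zod schema.
// The `schema` option name is an assumption based on the description above.
const recipeNode = new StructuredOutputNode({
    schema: z.object({
        title: z.string(),
        ingredients: z.array(z.string()),
        steps: z.array(z.string()),
    }),
    promptTemplate: "Write a recipe for {{dish}} as JSON.",
    llmConfig: { provider: "openai", model: "gpt-3.5-turbo" },
});
```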
### Utility Nodes

Non-LLM nodes for pipeline manipulation:
- DataEnricherNode: Injects external data into the pipeline
- MergeNode: Combines outputs from multiple nodes
```typescript
// Inject site map data
const siteMapEnricher = new DataEnricherNode({
    enricher: (article, siteMap) => ({ article, siteMap }),
    context: {
        pages: [
            /* site pages */
        ],
    },
});

// Merge parallel processing results
const merger = new MergeNode({
    merger: ([summary, keyPoints]) => ({ summary, keyPoints }),
});
```

## Pipeline Patterns
### Basic Pipelines

Chain nodes together with the `pipe()` method:

```typescript
const pipeline = extractor.pipe(enricher).pipe(generator);
const result = await pipeline.execute(input);
```

### Data Enrichment

Inject external data into your pipeline:

```typescript
// Create a pipeline with external data
const keyPointExtractor = new ExtractionNode({
    /* ... */
});

const siteMapEnricher = new DataEnricherNode({
    enricher: (extractionResult, siteMap) => ({
        extraction: extractionResult,
        siteMap: siteMap,
    }),
    context: fetchSiteMap, // async function or static data
});

const articleFormatter = new LLMNode({
    /* ... */
});

const pipeline = keyPointExtractor.pipe(siteMapEnricher).pipe(articleFormatter);
```

### Parallel Processing

Process data through multiple nodes and merge the results:

```typescript
// Define parallel nodes
const summaryNode = new LLMNode({
    /* ... */
});

const keyPointsNode = new LLMNode({
    /* ... */
});

// Merge node to combine results
const mergeNode = new MergeNode({
    merger: ([summaryResult, keyPointsResult]) => ({
        summary: summaryResult.summary,
        keyPoints: keyPointsResult.keyPoints,
    }),
});

// Helper to create a parallel pipeline
const parallelPipeline = MergeNode.createPipeline(
    [summaryNode, keyPointsNode],
    mergeNode
);

// Use the pipeline
const mergedResult = await parallelPipeline({ text: "..." });
```

### Custom Workflows

For maximum flexibility, call `execute()` directly:

```typescript
async function customWorkflow(text: string) {
    // First analysis
    const analysis = await analyzer.execute({ text });

    // Custom business logic
    if (analysis.sentiment === "negative") {
        // Handle negative content specially
    }

    // External data integration
    const userData = await fetchUserData();

    // Final generation with combined context
    return generator.execute({
        topic: analysis.mainTopic,
        userData,
    });
}
```

## API Reference

### LLMNode

The core class that encapsulates an LLM interaction pattern.

#### Constructor Options

```typescript
{
    promptTemplate: string | ((input: TInput) => string);
    llmConfig: {
        provider: string; // 'openai', 'anthropic', 'bedrock', 'genai'
        model: string;
        temperature?: number;
        maxTokens?: number;
        enableResearch?: boolean; // Enable research/thinking mode
        providerOptions?: {
            systemPrompt?: string;
            // Provider-specific options
        };
        // OpenAI research configuration
        reasoning?: {
            effort: 'low' | 'medium' | 'high';
            summary?: 'auto' | 'concise' | 'detailed';
        };
        // Anthropic/Bedrock thinking configuration
        thinking?: {
            type: 'enabled';
            budget_tokens: number;
        };
        // AWS Bedrock configuration
        awsRegion?: string;
        awsAccessKeyId?: string;
        awsSecretAccessKey?: string;
        awsSessionToken?: string;
    };
    parser: (rawResponse: string) => TOutput;
}
```
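The `providerOptions.systemPrompt` field above is the hook for system-level instructions. A minimal sketch (the model, prompt, and values are illustrative):

```typescript
// Sketch: pass a system prompt through providerOptions
const translator = new LLMNode<{ text: string }, string>({
    promptTemplate: "Translate to French: {{text}}",
    llmConfig: {
        provider: "openai",
        model: "gpt-4",
        temperature: 0,
        providerOptions: {
            systemPrompt: "You are a professional translator. Reply with only the translation.",
        },
    },
    parser: (text) => text,
});
```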
#### Methods

- `execute(input: TInput): Promise<TOutput>` - Execute the node with input data
- `pipe` - Connect to another node

### Specialized Nodes
- `StructuredOutputNode`: Schema-validated outputs using Zod
- `ClassificationNode`: Classification with predefined categories
- `ExtractionNode`: Field extraction from unstructured text
- `ChainNode`: Multi-step reasoning chains
- `RAGNode` (Incomplete): Retrieval-augmented generation

### Utility Nodes
- `DataEnricherNode`: Inject external data into pipelines
- `MergeNode`: Combine outputs from multiple nodes

### Parsers
- `jsonParser` - Parse JSON responses
- `jsonFieldParser` - Extract a specific field from JSON
- `regexParser` - Extract data using regex patterns
- `labeledFieldsParser` - Parse responses with labeled fields
- `textParser()` - Return raw text responses
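Any of these can be dropped into a node's `parser` slot. A sketch using `jsonFieldParser`, assuming it takes the name of the field to extract (the signature and prompt are illustrative):

```typescript
import { LLMNode, jsonFieldParser } from "llm-nodes";

// Hypothetical example: keep only the "title" field from a JSON reply.
// jsonFieldParser("title") assumes the parser takes the field name.
const titleNode = new LLMNode<{ text: string }, string>({
    promptTemplate: 'Return JSON with a "title" summarizing: {{text}}',
    llmConfig: { provider: "openai", model: "gpt-3.5-turbo" },
    parser: jsonFieldParser<string>("title"),
});
```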
### Research Mode Utilities

- `supportsResearchMode(provider: string, model: string): boolean` - Check if a model supports research features
- `OPENAI_REASONING_MODELS: string[]` - List of OpenAI models with reasoning support
- `ANTHROPIC_THINKING_MODELS: string[]` - List of Anthropic models with thinking support

## Web Tools Support
The library supports Anthropic's web tools for real-time information access:
### Web Search

Enable Claude to autonomously search the web and return information with citations:

```typescript
const searchNode = new TextNode({
    promptTemplate: "What are the latest developments in {{topic}}?",
    llmConfig: {
        provider: "anthropic",
        model: "claude-sonnet-4-20250514",
        maxTokens: 2048,
        webSearch: {
            enabled: true,
            maxUses: 5, // Optional: limit number of searches
            allowedDomains: ["example.com"], // Optional: restrict to specific domains
            userLocation: "US", // Optional: for location-specific searches
        },
    },
});
```

### Web Fetch

Enable Claude to retrieve and analyze content from specific URLs:

```typescript
const fetchNode = new TextNode({
    promptTemplate: "Analyze the article at {{url}}",
    llmConfig: {
        provider: "anthropic",
        model: "claude-sonnet-4-20250514",
        maxTokens: 4096,
        webFetch: {
            enabled: true,
            maxUses: 10, // Optional: limit number of fetches
            allowedDomains: ["docs.example.com"], // Optional: security restriction
            citations: {
                enabled: true, // Optional: enable source citations
            },
        },
    },
});
```

### Combined Research Workflows

Combine web search and web fetch for comprehensive research workflows:

```typescript
const researchNode = new TextNode({
    promptTemplate: "Research {{topic}} and provide a detailed analysis",
    llmConfig: {
        provider: "anthropic",
        model: "claude-sonnet-4-20250514",
        maxTokens: 4096,
        webSearch: {
            enabled: true,
            maxUses: 3,
        },
        webFetch: {
            enabled: true,
            maxUses: 5,
            citations: { enabled: true },
        },
    },
});

// Claude will first search for relevant articles,
// then fetch and analyze the full content
const result = await researchNode.execute({
    topic: "quantum computing breakthroughs",
});
```

Key Features:
- Web Search: Claude autonomously searches and returns results with citations ($10 per 1,000 searches)
- Web Fetch: Retrieves full content from URLs provided by users or search results
- PDF Support: Web fetch can retrieve and analyze PDF documents
- Security: Web fetch only accesses URLs explicitly provided or from previous search/fetch results
- Usage Tracking: Both `searchCount` and `fetchCount` are tracked in token usage

## AWS Bedrock Support
The library supports AWS Bedrock for accessing Anthropic Claude models through AWS infrastructure. This is useful for enterprise deployments with existing AWS credentials and compliance requirements.
### Basic Usage

```typescript
import { TextNode } from "llm-nodes";

const bedrockNode = new TextNode({
    promptTemplate: "Summarize the following: {{text}}",
    llmConfig: {
        provider: "bedrock",
        model: "anthropic.claude-3-5-sonnet-20241022-v2:0",
        maxTokens: 1024,
    },
});

const result = await bedrockNode.execute({ text: "..." });
```

### Authentication

The Bedrock provider supports multiple authentication methods:

1. Environment Variables (recommended)

```bash
# Set in your environment or .env file
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
```

2. AWS Credentials File

The SDK automatically reads from `~/.aws/credentials`:

```ini
[default]
aws_access_key_id = your_access_key
aws_secret_access_key = your_secret_key
region = us-east-1
```

3. IAM Roles

When running on AWS (EC2, Lambda, ECS), the SDK automatically uses the attached IAM role.

4. Explicit Configuration

```typescript
const node = new TextNode({
    promptTemplate: "{{prompt}}",
    llmConfig: {
        provider: "bedrock",
        model: "anthropic.claude-3-5-sonnet-20241022-v2:0",
        maxTokens: 1024,
        awsRegion: "us-west-2",
        awsAccessKeyId: "AKIA...",
        awsSecretAccessKey: "...",
        awsSessionToken: "...", // Optional, for temporary credentials
    },
});
```

### Supported Models

Use Bedrock-style model identifiers:

| Model | Bedrock Model ID |
|-------|------------------|
| Claude 3.5 Sonnet v2 | `anthropic.claude-3-5-sonnet-20241022-v2:0` |
| Claude 3.5 Haiku | `anthropic.claude-3-5-haiku-20241022-v1:0` |
| Claude 3 Opus | `anthropic.claude-3-opus-20240229-v1:0` |
| Claude 3 Sonnet | `anthropic.claude-3-sonnet-20240229-v1:0` |
| Claude 3 Haiku | `anthropic.claude-3-haiku-20240307-v1:0` |

### Extended Thinking
Bedrock supports Anthropic's extended thinking feature:
```typescript
const thinkingNode = new TextNode({
    promptTemplate: "Solve this step by step: {{problem}}",
    llmConfig: {
        provider: "bedrock",
        model: "anthropic.claude-3-5-sonnet-20241022-v2:0",
        maxTokens: 4096,
        thinking: {
            type: "enabled",
            budget_tokens: 2000,
        },
    },
});
```

### Streaming

Enable streaming for large responses:

```typescript
const streamingNode = new TextNode({
    promptTemplate: "Write a detailed essay about {{topic}}",
    llmConfig: {
        provider: "bedrock",
        model: "anthropic.claude-3-5-sonnet-20241022-v2:0",
        maxTokens: 4096,
        stream: true,
    },
});
```

### Feature Comparison
| Feature | Anthropic API | AWS Bedrock |
|---------|---------------|-------------|
| Authentication | API Key | AWS Credentials |
| Web Search | Supported | Not Available |
| Web Fetch | Supported | Not Available |
| Extended Thinking | Supported | Supported |
| Streaming | Supported | Supported |
## Token Usage Tracking

The library provides built-in token usage tracking:

```typescript
// Create an LLM node
const textGenerator = new TextNode({
    promptTemplate: "Write about {{topic}}",
    llmConfig: {
        provider: "anthropic",
        model: "claude-3-sonnet-20240229",
    },
});

// Use the node
const result = await textGenerator.execute({ topic: "AI" });

// Get token usage statistics
const usage = textGenerator.getTotalTokenUsage();
console.log(`Input Tokens: ${usage.inputTokens}`);
console.log(`Output Tokens: ${usage.outputTokens}`);
console.log(`Total Tokens: ${usage.totalTokens}`);
console.log(`Search Count: ${usage.searchCount || 0}`); // Web search usage
console.log(`Fetch Count: ${usage.fetchCount || 0}`); // Web fetch usage

// Get detailed usage records
const records = textGenerator.getUsageRecords();
```

Token tracking also works across pipelines:

```typescript
// Create a pipeline
const pipeline = nodeA.pipe(nodeB).pipe(nodeC);

// Execute
const result = await pipeline.execute(input);

// Get token usage for the entire pipeline
const usage = pipeline.getTotalTokenUsage();
```

## Research Mode Support
The library now includes native support for advanced reasoning and thinking models from OpenAI and Anthropic. These models can perform deeper analysis and show their reasoning process.
### Supported Models
OpenAI:
- o1-preview, o1-mini
- o3, o3-mini
- o4-mini
Anthropic:
- claude-3-7-sonnet
- claude-3.7-sonnet
- claude-3-7-sonnet-latest
### Enabling Research Mode

Enable research mode by setting `enableResearch: true` in your LLM configuration:

```typescript
import { LLMNode, jsonParser, textParser } from "llm-nodes";

// OpenAI o3 with reasoning mode
const reasoningNode = new LLMNode({
    promptTemplate: "Solve this complex problem: {{problem}}",
    llmConfig: {
        provider: "openai",
        model: "o3-mini",
        enableResearch: true,
        reasoning: {
            effort: "high", // "low" | "medium" | "high"
            summary: "detailed", // "auto" | "concise" | "detailed"
        },
    },
    parser: jsonParser(),
});

// Anthropic Claude 3.7 with thinking mode
const thinkingNode = new LLMNode({
    promptTemplate: "Analyze this data: {{data}}",
    llmConfig: {
        provider: "anthropic",
        model: "claude-3-7-sonnet-latest",
        enableResearch: true,
        thinking: {
            type: "enabled",
            budget_tokens: 2000, // Max tokens for thinking process
        },
    },
    parser: textParser(),
});
```

### Research Token Tracking

Research modes use additional tokens for reasoning/thinking that are tracked separately:

```typescript
const result = await reasoningNode.execute({ problem: "..." });
const usage = reasoningNode.getTotalTokenUsage();

console.log(`Input tokens: ${usage.inputTokens}`);
console.log(`Output tokens: ${usage.outputTokens}`);
console.log(`Research tokens: ${usage.researchTokens}`); // Reasoning/thinking tokens
console.log(`Total tokens: ${usage.totalTokens}`);
```

### Model Detection

The library automatically detects research-capable models:

```typescript
import { supportsResearchMode } from "llm-nodes";

// Check if a model supports research features
console.log(supportsResearchMode("openai", "o3-mini")); // true
console.log(supportsResearchMode("openai", "gpt-4")); // false
`$3
You can combine research and regular nodes in pipelines:
`typescript
// First node uses thinking mode for analysis
const analyzeNode = new LLMNode({
    promptTemplate: "Analyze: {{input}}",
    llmConfig: {
        provider: "anthropic",
        model: "claude-3-7-sonnet-latest",
        enableResearch: true,
        thinking: { type: "enabled", budget_tokens: 1500 },
    },
    parser: jsonParser(),
});

// Second node uses regular mode for summarization
const summarizeNode = new TextNode({
    promptTemplate: "Summarize: {{analysis}}",
    llmConfig: {
        provider: "openai",
        model: "gpt-4",
    },
});

const pipeline = analyzeNode.pipe(summarizeNode);
const result = await pipeline.execute({ input: "..." });

// Get combined token usage
const usage = pipeline.getTotalTokenUsage();
console.log(`Total research tokens: ${usage.researchTokens}`);
```

### Important Notes
1. Access Requirements: OpenAI o1/o3 models require special access. Check OpenAI's documentation for availability.
2. Pricing: Research models typically have different pricing due to additional reasoning tokens.
3. Response Time: Research modes take longer as the model "thinks" through problems.
4. Compatibility: The `enableResearch` flag is ignored for models that don't support it.

## Roadmap

- RAGNode implementation

## License

MIT