# llm-advanced-tools

Provider-agnostic advanced tool use library for LLMs.

A TypeScript library that brings advanced tool use features to all major LLM providers through the Vercel AI SDK (OpenAI, Anthropic, Google, and more).
## Features

### Tool Search

Tools can be registered with deferred loading and discovered on demand, instead of packing every definition into the context window.

Benefits:

- Reduces token usage by deferring tool loading
- Improves accuracy with large tool sets
- Scales to hundreds or thousands of tools
- Anthropic reports 85%+ token reduction in their testing

### Programmatic Tool Calling

The LLM writes code that orchestrates tool calls inside a sandbox, so only final results reach the model.

Benefits:

- Keeps intermediate results out of the LLM context
- Parallel tool execution
- Better control flow with loops, conditionals, and data transformations
- Anthropic reports 37%+ token reduction on complex tasks in their testing

### Tool Use Examples

Concrete input examples attached to tool definitions teach the model how to call them.

Benefits:

- Show proper usage patterns
- Clarify format conventions and optional parameters
- Anthropic reports 18%+ accuracy improvement on complex parameters in their testing
## Installation

```bash
npm install llm-advanced-tools
```
## Quick Start

```typescript
import { Client, ToolRegistry, VercelAIAdapter, ToolDefinition } from 'llm-advanced-tools';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// 1. Create a tool registry
const registry = new ToolRegistry({
  strategy: 'smart', // 'smart', 'keyword', or 'custom'
  maxResults: 5
});

// 2. Define tools with advanced features
const weatherTool: ToolDefinition = {
  name: 'get_weather',
  description: 'Get current weather for a location',
  inputSchema: {
    type: 'object',
    properties: {
      location: { type: 'string', description: 'City name' },
      units: {
        type: 'string',
        enum: ['celsius', 'fahrenheit'],
        description: 'Temperature units'
      }
    },
    required: ['location']
  },
  // Tool Use Examples - improve accuracy
  inputExamples: [
    { location: 'San Francisco', units: 'fahrenheit' },
    { location: 'Tokyo', units: 'celsius' }
  ],
  // Defer loading - only load when searched
  deferLoading: true,
  // Allow programmatic calling
  allowedCallers: ['code_execution'],
  handler: async (input) => {
    // Your implementation
    return { temp: 72, conditions: 'Sunny' };
  }
};

registry.register(weatherTool);

// 3. Create a client with any provider via the Vercel AI SDK

// Use with OpenAI GPT-5
const openaiClient = new Client({
  adapter: new VercelAIAdapter(openai('gpt-5')),
  enableToolSearch: true,
  enableProgrammaticCalling: true
}, registry);

// Or use with Anthropic Claude Sonnet 4.5
const claudeClient = new Client({
  adapter: new VercelAIAdapter(anthropic('claude-sonnet-4-5')),
  enableToolSearch: true,
  enableProgrammaticCalling: true
}, registry);

// Or use with Google Gemini
// const geminiClient = new Client({
//   adapter: new VercelAIAdapter(google('gemini-2.0-flash-exp')),
//   enableToolSearch: true,
//   enableProgrammaticCalling: true
// }, registry);

// 4. Chat!
const response = await openaiClient.ask("What's the weather in San Francisco?");
console.log(response);
```
## Why Vercel AI SDK?

- ✅ One Interface: Work with all major providers (OpenAI, Anthropic, Google, Mistral, etc.)
- ✅ Easy Switching: Change providers by modifying one line of code
- ✅ Latest Models: Support for GPT-5, Claude Sonnet 4.5, Gemini 2.0, and more
- ✅ Advanced Features: Tool search and programmatic calling work across all providers
- ✅ Type Safety: Full TypeScript support with excellent IDE integration
- ✅ AI SDK 6 Ready: Compatible with the latest Vercel AI SDK v6.0
## Architecture

```
┌─────────────────────────────────────────────┐
│              Your Application               │
└──────────────────────┬──────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────┐
│           Unified Tool Interface            │
│  • ToolRegistry (search, defer loading)     │
│  • CodeExecutor (programmatic calling)      │
│  • ToolDefinition (with examples)           │
└──────────────────────┬──────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────┐
│            Vercel AI SDK Adapter            │
│  Supports all Vercel AI SDK providers:      │
│  • OpenAI (GPT-4, GPT-5)                    │
│  • Anthropic (Claude 3.5, Claude 4.5)       │
│  • Google (Gemini)                          │
│  • Mistral, Groq, Cohere, and more          │
└─────────────────────────────────────────────┘
```
## How It Works

### Tool Search

For providers without native support, we implement client-side search:

1. Tools marked with `deferLoading: true` are registered but not loaded
2. A special `tool_search` tool is automatically added
3. When the LLM needs capabilities, it searches using the `tool_search` tool
4. Only relevant tools are loaded into context
5. Massive token savings (85%+ reduction)
Search strategies:

- `smart`: Intelligent relevance ranking using the BM25 algorithm (recommended, default)
- `keyword`: Fast keyword matching for exact terms
- `custom`: Provide your own search function
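For example, a large deferred tool set might be registered and searched like this minimal sketch (the `makeTool` helper and the exact `getStats()` output are illustrative, not part of the library):

```typescript
import { ToolRegistry, ToolDefinition } from 'llm-advanced-tools';

// Hypothetical helper that builds compact deferred tool definitions.
const makeTool = (name: string, description: string): ToolDefinition => ({
  name,
  description,
  inputSchema: { type: 'object', properties: {}, required: [] },
  deferLoading: true, // registered, but not loaded into context
  handler: async () => ({ ok: true })
});

const registry = new ToolRegistry({ strategy: 'smart', maxResults: 5 });

registry.registerMany([
  makeTool('get_weather', 'Get current weather for a location'),
  makeTool('get_forecast', 'Get a multi-day weather forecast'),
  makeTool('create_ticket', 'Create a support ticket')
  // ... hundreds more
]);

// Only the most relevant tools are loaded into the LLM context.
const matches = await registry.search('weather in Tokyo');
console.log(matches.map((t) => t.name));
console.log(registry.getStats()); // e.g. { total: 3, loaded: 1, deferred: 2 }
```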
### Programmatic Tool Calling

For providers without native support, we use sandboxed code execution:

1. Tools marked with `allowedCallers: ['code_execution']` can be called from code
2. The LLM writes code to orchestrate multiple tool calls
3. The code runs in a sandbox (VM, Docker, or cloud service)
4. Only final results enter the LLM context, not intermediate data
5. Supports parallel execution, loops, and conditionals
Example: instead of this (traditional):

```
→ LLM: get_team_members("engineering")
← API: [20 members...]
→ LLM: get_expenses("emp_1", "Q3")
← API: [50 line items...]
   ... 19 more calls ...
→ LLM: Manual analysis of 1000+ line items
```

you get this (programmatic):

```
→ LLM: Writes code to orchestrate all calls
   Code runs in sandbox
← Only final results: [2 people who exceeded budget]
```
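The orchestration code the model writes could look like the following sketch. The in-sandbox `callTool` helper, the object-style parameters, and the budget logic are assumptions for illustration; only the tool names come from the example above:

```typescript
// Hypothetical sandbox code generated by the LLM. `callTool` is assumed
// to be injected into the sandbox by the executor.
declare function callTool(name: string, input: unknown): Promise<any>;

const members: { id: string }[] = await callTool('get_team_members', {
  team: 'engineering'
});

// Fetch all expense reports in parallel; none of this enters the LLM context.
const expenses = await Promise.all(
  members.map((m) => callTool('get_expenses', { employee: m.id, quarter: 'Q3' }))
);

// Aggregate locally so the model never sees the raw line items.
const BUDGET = 10_000; // assumed per-employee budget
const overBudget = members.filter((m, i) => {
  const total = expenses[i].reduce((sum: number, e: any) => sum + e.amount, 0);
  return total > BUDGET;
});

// Only this summary re-enters the LLM context.
overBudget;
```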
### Tool Use Examples

For providers without native support, examples are injected into tool descriptions:

```typescript
{
  name: "create_ticket",
  description: "Create a support ticket.\n" +
    "Examples:\n" +
    '1. {"title": "Login broken", "priority": "critical", ...}\n' +
    '2. {"title": "Feature request", "labels": ["enhancement"]}',
  // ...
}
```
The LLM learns proper usage patterns from the examples.
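For instance, the injected description above could be produced from a definition that declares `inputExamples` (the `create_ticket` schema fields here are illustrative):

```typescript
import { ToolDefinition } from 'llm-advanced-tools';

const createTicketTool: ToolDefinition = {
  name: 'create_ticket',
  description: 'Create a support ticket.',
  inputSchema: {
    type: 'object',
    properties: {
      title: { type: 'string' },
      priority: { type: 'string', enum: ['low', 'normal', 'critical'] },
      labels: { type: 'array', items: { type: 'string' } }
    },
    required: ['title']
  },
  // On providers without native support, these are appended to the
  // description as "Examples: 1. {...} 2. {...}".
  inputExamples: [
    { title: 'Login broken', priority: 'critical' },
    { title: 'Feature request', labels: ['enhancement'] }
  ],
  handler: async (input) => ({ id: 'TICKET-1', ...input })
};
```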
## Provider Support

All providers are supported through the Vercel AI SDK:

| Provider  | Tool Search            | Code Execution         | Examples               | Latest Models     |
|-----------|------------------------|------------------------|------------------------|-------------------|
| OpenAI    | ✅ (emulated)          | ✅ (emulated)          | ✅ (emulated)          | GPT-5, GPT-4o     |
| Anthropic | ✅ (native + emulated) | ✅ (native + emulated) | ✅ (native + emulated) | Claude Sonnet 4.5 |
| Google    | ✅ (emulated)          | ✅ (emulated)          | ✅ (emulated)          | Gemini 2.0        |
| Mistral   | ✅ (emulated)          | ✅ (emulated)          | ✅ (emulated)          | Latest            |
| Groq      | ✅ (emulated)          | ✅ (emulated)          | ✅ (emulated)          | Latest            |
| Cohere    | ✅ (emulated)          | ✅ (emulated)          | ✅ (emulated)          | Latest            |
Note: Anthropic models have native support for these features. For other providers, features are emulated client-side.
## Configuration

### Tool Search

```typescript
const registry = new ToolRegistry({
  strategy: 'smart', // 'smart' (default), 'keyword', or 'custom'
  maxResults: 10,    // Max tools to return per search
  threshold: 0.0,    // Minimum relevance score (0-100)
  customSearchFn: async (query, tools) => {
    // Your custom search logic (only needed if strategy is 'custom')
    return tools; // e.g. return a filtered subset
  }
});
```
Strategy guide:

- `smart`: Best for most cases; understands relevance and context
- `keyword`: Fast exact matching; use when you know exact tool names
- `custom`: Advanced; provide your own search algorithm (see the sketch below)
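As a sketch of the `custom` strategy, `customSearchFn` receives the query and the registered tools and returns the subset to load (the substring matching below is just one possible policy):

```typescript
import { ToolRegistry } from 'llm-advanced-tools';

// Route searches through a simple substring match on name + description.
const registry = new ToolRegistry({
  strategy: 'custom',
  customSearchFn: async (query, tools) => {
    const q = query.toLowerCase();
    return tools.filter(
      (t) =>
        t.name.toLowerCase().includes(q) ||
        t.description.toLowerCase().includes(q)
    );
  }
});
```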
### Code Execution

```typescript
const client = new Client({
  adapter: new VercelAIAdapter(openai('gpt-5')),
  enableProgrammaticCalling: true,
  executorConfig: {
    timeout: 30000,       // 30 seconds
    memoryLimit: '256mb',
    environment: {        // Environment variables
      NODE_ENV: 'production'
    }
  }
});
```
## API Reference

### ToolDefinition

```typescript
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: JSONSchema | ZodSchema;
  inputExamples?: any[];     // Tool Use Examples
  deferLoading?: boolean;    // For Tool Search
  allowedCallers?: string[]; // For Programmatic Calling
  handler: (input: any) => Promise<any>;
}
```
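Because `inputSchema` accepts either JSON Schema or a Zod schema, a definition can also be written with `zod`; a minimal sketch:

```typescript
import { z } from 'zod';
import { ToolDefinition } from 'llm-advanced-tools';

const searchDocsTool: ToolDefinition = {
  name: 'search_docs',
  description: 'Search the documentation for a query',
  inputSchema: z.object({
    query: z.string().describe('Search query'),
    limit: z.number().int().positive().optional()
  }),
  handler: async (input) => {
    // Your implementation
    return { results: [] };
  }
};
```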
### ToolRegistry

```typescript
class ToolRegistry {
  register(tool: ToolDefinition): void
  registerMany(tools: ToolDefinition[]): void
  search(query: string, maxResults?: number): Promise<ToolDefinition[]>
  get(name: string): ToolDefinition | undefined
  getLoadedTools(): ToolDefinition[]
  getStats(): { total: number; loaded: number; deferred: number }
}
```
### Client

```typescript
class Client {
  constructor(config: ClientConfig, registry?: ToolRegistry)
  chat(request: ChatRequest): Promise<ChatResponse>
  ask(prompt: string, systemPrompt?: string): Promise<string>
  getRegistry(): ToolRegistry
}
```
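Putting the signatures together, a minimal usage sketch (model choice and prompts are arbitrary):

```typescript
import { Client, ToolRegistry, VercelAIAdapter } from 'llm-advanced-tools';
import { anthropic } from '@ai-sdk/anthropic';

const client = new Client(
  { adapter: new VercelAIAdapter(anthropic('claude-sonnet-4-5')) },
  new ToolRegistry({ strategy: 'smart' })
);

// `ask` is the convenience wrapper; the second argument is an optional system prompt.
const answer = await client.ask(
  "What's the weather in Tokyo?",
  'Answer in one sentence.'
);
console.log(answer);
console.log(client.getRegistry().getStats());
```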
## When to Use Each Feature

### Tool Search

Use when:

- Tool definitions are consuming >10K tokens
- You're seeing tool selection accuracy issues
- You're building MCP-powered systems with multiple servers
- 10+ tools are available

Skip when:

- The tool library is small (<10 tools)
- All tools are used frequently
- Tool definitions are compact
### Programmatic Tool Calling

Use when:

- Processing large datasets where you only need aggregates
- Running multi-step workflows with 3+ dependent tool calls
- Filtering, sorting, or transforming tool results
- Handling tasks where intermediate data shouldn't influence reasoning
- Running parallel operations across many items

Skip when:

- Making simple single-tool invocations
- Working on tasks where the LLM should see all intermediate results
- Running quick lookups with small responses
### Tool Use Examples

Use when:

- Complex nested structures where valid JSON doesn't imply correct usage
- Tools with many optional parameters
- APIs with domain-specific conventions
- Similar tools where examples clarify which to use

Skip when:

- Simple single-parameter tools with obvious usage
- Standard formats (URLs, emails) that the LLM already understands
- Validation concerns are better handled by JSON Schema
## Security

The default VM executor is NOT secure for untrusted code. For production, choose one of:

1. Docker (recommended for local): full isolation; requires Docker installed
2. E2B: cloud sandbox service; easy setup, scalable
3. Modal: serverless containers
4. Custom: implement the `CodeExecutor` interface (see the sketch below)
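A custom executor might be sketched as below; note that the `execute` method name and signature are assumptions based on this README, not a published contract:

```typescript
// A custom executor plugs into the Client's programmatic calling path;
// the exact `CodeExecutor` interface shape is assumed here.
class MyExecutor {
  // Assumed signature: receive LLM-generated code, return its final result.
  async execute(code: string, env?: Record<string, string>): Promise<unknown> {
    // Forward to your real sandbox (Docker, E2B, Modal, ...) here.
    throw new Error('Wire up a real sandbox before production use');
  }
}
```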
## Roadmap

- [x] Core library with Vercel AI SDK adapter
- [x] AI SDK 6 support (tool `parameters` renamed to `inputSchema`, an AI SDK 6 requirement)
- [x] Latest model support (GPT-5, Claude Sonnet 4.5)
- [ ] Docker-based executor
- [ ] E2B integration
- [ ] Streaming support
- [ ] Async tool execution
- [ ] LangChain/LlamaIndex integration
## Contributing

Contributions welcome! Please see CONTRIBUTING.md.

## License

MIT

## Acknowledgments

This library implements the features described in Anthropic's blog post "Introducing advanced tool use on the Claude Developer Platform". The implementation is provider-agnostic and works with any LLM that supports function calling through the Vercel AI SDK.