Lightweight LLM agentic framework for TypeScript
Installation
npm install @treppenhaus/chisato
Features
- AgentLoop.run and Agent.chat accept either a plain string or a Message[] history
- Two provider methods for different call styles: sendAgenticMessage and sendMessage
- Built-in default actions for autonomous loops: user_output and query_llm
Quick Start
```typescript
import { Agent, ILLMProvider, IAction, Message } from "chisato";

// 1. Implement your LLM provider
class MyLLMProvider implements ILLMProvider {
  async sendAgenticMessage(
    messages: Message[],
    systemPrompt?: string
  ): Promise<string> {
    // For agentic calls (with actions)
    return await callYourLLM(messages, systemPrompt);
  }

  async sendMessage(
    messages: Message[],
    systemPrompt?: string
  ): Promise<string> {
    // For normal chat (no actions)
    return await callYourLLM(messages, systemPrompt);
  }
}

// 2. Create custom actions
class CalculatorAction implements IAction {
  name = "calculator";
  description = "Perform mathematical calculations";
  parameters = [
    {
      name: "expression",
      type: "string" as const,
      description: "Mathematical expression to evaluate",
      required: true,
    },
  ];

  async execute(params: Record<string, any>): Promise<any> {
    // eval keeps the example short; use a proper expression parser in real code
    const result = eval(params.expression);
    return { result };
  }
}

// 3. Create and configure your agent
const provider = new MyLLMProvider();
const agent = new Agent(provider);

// 4. Register actions
agent.registerAction(new CalculatorAction());

// 5. Start chatting!
const response = await agent.chat("What is 15 * 23?");
console.log(response);
```
Documentation
Guides
- ILLMProvider Guide - Understanding the two LLM methods
- Using Real LLMs - Integration with OpenAI, Anthropic, Ollama
- Retry System Guide - Handling failures and retries
- AgentLoop Design - How autonomous task execution works
- Quick Reference - Quick API reference
- Architecture - System architecture overview
- Implementation Summary - Implementation details
Key Features
Two Types of LLM Calls:
1. Agentic Messages (sendAgenticMessage): Used when the LLM should have access to actions and can decide whether to use them
2. Normal Messages (sendMessage): Used for simple chat without action capabilities
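In practice both methods often delegate to the same API client; the agent decides which one to call and supplies the appropriate system prompt. A minimal sketch against the OpenAI SDK (the model name and the mapping of chisato's Message roles onto OpenAI roles are assumptions, not part of chisato):
```typescript
import OpenAI from "openai";
import { ILLMProvider, Message } from "chisato";

// Sketch of an ILLMProvider backed by the OpenAI SDK.
class OpenAIProvider implements ILLMProvider {
  private client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  private async complete(messages: Message[], systemPrompt?: string): Promise<string> {
    const completion = await this.client.chat.completions.create({
      model: "gpt-4o-mini", // placeholder model name
      messages: [
        ...(systemPrompt ? [{ role: "system" as const, content: systemPrompt }] : []),
        ...messages.map((m) => ({ role: m.role as "user" | "assistant", content: m.content })),
      ],
    });
    return completion.choices[0]?.message?.content ?? "";
  }

  // Both interface methods delegate to the same call here; the agent supplies
  // a different system prompt for agentic calls.
  async sendAgenticMessage(messages: Message[], systemPrompt?: string): Promise<string> {
    return this.complete(messages, systemPrompt);
  }

  async sendMessage(messages: Message[], systemPrompt?: string): Promise<string> {
    return this.complete(messages, systemPrompt);
  }
}
```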
Retry System:
- Automatic retry for LLM failures (empty responses, malformed JSON, API errors)
- Automatic retry for action execution failures
- Configurable retry limits and backoff strategies
- Callbacks for monitoring and alerting
Core Concepts
How It Works
1. You send a message to the agent using agent.chat()
2. The agent builds a system prompt that includes descriptions of all registered actions
3. The LLM responds, potentially including action calls in JSON format
4. The agent parses the response and automatically executes any requested actions
5. Action results are fed back to the LLM
6. Steps 3-5 repeat until the LLM provides a final response without actions
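From the caller's side this whole loop happens inside a single chat() call. A small sketch, reusing the provider and action from the Quick Start, that runs one loop and then inspects the accumulated history:
```typescript
const agent = new Agent(new MyLLMProvider(), { maxIterations: 5 });
agent.registerAction(new CalculatorAction());

// One chat() call may involve several LLM round trips and action executions
const answer = await agent.chat("What is (15 * 23) + 7?");
console.log("Final answer:", answer);

// Inspect everything that was exchanged during the loop
for (const message of agent.getHistory()) {
  console.log(`${message.role}: ${message.content}`);
}
```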
Creating Custom Actions
Implement the IAction interface:
```typescript
import { IAction } from "chisato";

class WeatherAction implements IAction {
  name = "get_weather";
  description = "Get current weather for a location";
  parameters = [
    {
      name: "location",
      type: "string" as const,
      description: "City name or coordinates",
      required: true,
    },
  ];

  async execute(params: Record<string, any>): Promise<any> {
    const weather = await fetchWeatherAPI(params.location);
    return {
      temperature: weather.temp,
      condition: weather.condition,
    };
  }
}

agent.registerAction(new WeatherAction());
```
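Action failures feed into the retry system. The sketch below assumes that throwing from execute() is how an action reports failure (which is what triggers the onActionRetry callbacks shown further down); fetchUserProfile is a hypothetical helper:
```typescript
import { IAction } from "chisato";

class UserLookupAction implements IAction {
  name = "lookup_user";
  description = "Look up a user profile by id";
  parameters = [
    {
      name: "userId",
      type: "string" as const,
      description: "Numeric id of the user",
      required: true,
    },
  ];

  async execute(params: Record<string, any>): Promise<any> {
    // Throwing signals a failed execution (assumed), so transient errors
    // get retried up to maxActionRetries times.
    if (!/^\d+$/.test(params.userId)) {
      throw new Error(`Invalid userId: ${params.userId}`);
    }
    return await fetchUserProfile(params.userId); // hypothetical API helper
  }
}
```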
AgentLoop
The AgentLoop class enables autonomous task breakdown and execution:
```typescript
import { AgentLoop } from "chisato";

const agentLoop = new AgentLoop(provider, {
  includeDefaultActions: true,
  maxSteps: 20,
  maxRetries: 3,
  maxActionRetries: 2,
  onUserOutput: (message) => console.log("Agent:", message),
  onActionExecuted: (action) => console.log("Executed:", action.actionName),
  onActionRetry: (name, attempt, error) =>
    console.log(`Retry ${name}: ${attempt}`),
});

// Register custom actions
agentLoop.registerAction(new SearchAction());
agentLoop.registerAction(new WeatherAction());

// Run a complex task - the LLM decides which actions to use
const result = await agentLoop.run(
  "Search for TypeScript tutorials and summarize"
);

// OR inject history/context
const resultWithContext = await agentLoop.run([
  { role: "user", content: "Context: Current location is Berlin." },
  { role: "assistant", content: "Understood." },
  { role: "user", content: "What is the weather like?" },
]);
```
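After a run you can inspect what the loop produced using the accessors documented in the API reference below:
```typescript
// User-facing messages collected during the run
for (const output of agentLoop.getUserOutputs()) {
  console.log("Agent said:", output);
}

// Every action the loop executed, in order
for (const execution of agentLoop.getActionsExecuted()) {
  console.log("Executed action:", execution.actionName);
}

console.log("Messages in history:", agentLoop.getHistory().length);
```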
Retry Configuration
```typescript
const agentLoop = new AgentLoop(provider, {
  // LLM retry options
  maxRetries: 3, // Retry LLM calls up to 3 times
  onInvalidOutput: (attempt, error, output) => {
    console.log(`LLM retry ${attempt}: ${error}`);
  },

  // Action retry options
  maxActionRetries: 2, // Retry each action up to 2 times
  onActionRetry: (actionName, attempt, error) => {
    console.log(`Action ${actionName} retry ${attempt}: ${error}`);
  },
  onActionMaxRetries: (actionName, error) => {
    console.error(`Action ${actionName} failed permanently: ${error}`);
  },
});
```
Examples
See the examples directory for complete working examples:
- basic-usage.ts - Simple agent with a calculator action
- custom-provider.ts - Example LLM provider implementation
- agent-loop-example.ts - Comprehensive AgentLoop examples
- action-retry-example.ts - Demonstrating retry functionality
API Reference
Agent
Main agent class for building conversational agents.
Constructor:
```typescript
new Agent(provider: ILLMProvider, options?: AgentOptions)
```
Methods:
- registerAction(action: IAction): void - Register an action
- chat(input: string | Message[]): Promise<string> - Send a message or history and get a response
- getHistory(): Message[] - Get conversation history
- clearHistory(): void - Clear conversation history
Options:
```typescript
interface AgentOptions {
  maxIterations?: number; // Maximum conversation loops (default: 10)
  systemPromptPrefix?: string; // Custom system prompt prefix
  maxRetries?: number; // Max LLM retries (default: 3)
  maxActionRetries?: number; // Max action retries (default: 2)
  onInvalidOutput?: (attempt: number, error: string, output: string) => void;
  onActionRetry?: (actionName: string, attempt: number, error: string) => void;
  onActionMaxRetries?: (actionName: string, error: string) => void;
}
```
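A usage sketch combining these options with the Message[] input form of chat() (the option values and messages are illustrative):
```typescript
const agent = new Agent(provider, {
  maxIterations: 6,
  systemPromptPrefix: "You are a concise assistant.",
  onActionRetry: (name, attempt, error) =>
    console.warn(`Retrying ${name} (attempt ${attempt}): ${error}`),
});
agent.registerAction(new CalculatorAction());

// chat() accepts a prior Message[] history as well as a plain string
const reply = await agent.chat([
  { role: "user", content: "My budget is 100 EUR." },
  { role: "assistant", content: "Noted." },
  { role: "user", content: "Can I afford three items at 35 EUR each?" },
]);
console.log(reply);
```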
AgentLoop
Main class for autonomous task execution with automatic action recognition.
Constructor:
```typescript
new AgentLoop(provider: ILLMProvider, options?: AgentLoopOptions)
```
Methods:
- registerAction(action: IAction): void - Register an action
- run(task: string | Message[]): Promise - Execute a task or process history
- getHistory(): Message[] - Get conversation history
- getUserOutputs(): string[] - Get all user outputs
- getActionsExecuted(): ActionExecution[] - Get all executed actions
Options:
```typescript
interface AgentLoopOptions {
  maxSteps?: number; // Maximum steps (default: 10)
  includeDefaultActions?: boolean; // Include user_output and query_llm (default: true)
  systemPrompt?: string; // Custom system prompt
  maxRetries?: number; // Max LLM retries (default: 3)
  maxActionRetries?: number; // Max action retries (default: 2)
  onStepComplete?: (step: AgentStep) => void;
  onUserOutput?: (message: string) => void;
  onActionExecuted?: (execution: ActionExecution) => void;
  onInvalidOutput?: (attempt: number, error: string, output: string) => void;
  onActionRetry?: (actionName: string, attempt: number, error: string) => void;
  onActionMaxRetries?: (actionName: string, error: string) => void;
}
```
ILLMProvider
Interface for LLM providers.
Methods:
- sendAgenticMessage(messages: Message[], systemPrompt?: string): Promise<string> - For agentic calls
- sendMessage(messages: Message[], systemPrompt?: string): Promise<string> - For normal chat
See ILLMPROVIDER_GUIDE.md for detailed information.
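For unit tests it can be convenient to stub the provider with canned responses instead of calling a real LLM. A minimal sketch:
```typescript
import { ILLMProvider, Message } from "chisato";

// Returns pre-scripted responses in order - handy for exercising agents and
// actions deterministically in tests.
class StubProvider implements ILLMProvider {
  constructor(private responses: string[]) {}

  private next(): string {
    return this.responses.shift() ?? "No scripted response left.";
  }

  async sendAgenticMessage(_messages: Message[], _systemPrompt?: string): Promise<string> {
    return this.next();
  }

  async sendMessage(_messages: Message[], _systemPrompt?: string): Promise<string> {
    return this.next();
  }
}
```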
IAction
Interface for actions.
Properties:
- name: string - Unique action name
- description: string - What the action does
- parameters: ParameterDefinition[] - Parameter definitions
Methods:
- execute(params: Record<string, any>): Promise<any> - Execute the action with the given parameters and return its result