Runtime adapter that bridges agent frameworks (OpenAI SDK, LangChain) with production infrastructure
This package is the glue between your chosen agent framework (OpenAI SDK, LangChain, etc.) and your deployment targets.
This runtime acts as an adapter layer that:
- Wraps different agent frameworks in a unified interface
- Handles framework-specific quirks and requirements
- Provides consistent deployment patterns across frameworks
- Enables hot-swapping between frameworks without changing deployment code
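The hot-swapping idea above can be pictured as a single interface with interchangeable backends. This is only an illustrative sketch of the pattern — the `FrameworkAdapter` interface and the two stub adapters are hypothetical names, not part of this package's API:

```typescript
// Illustrative only: one interface, interchangeable framework backends.
// None of these names come from the package itself.
interface FrameworkAdapter {
  chat(message: string): Promise<string>;
}

class OpenAISdkAdapter implements FrameworkAdapter {
  async chat(message: string): Promise<string> {
    // A real implementation would call the OpenAI SDK here.
    return `[openai-sdk] echo: ${message}`;
  }
}

class LangChainAdapter implements FrameworkAdapter {
  async chat(message: string): Promise<string> {
    // A real implementation would invoke a LangChain chain here.
    return `[langchain] echo: ${message}`;
  }
}

// Deployment code depends only on the interface, so the framework
// can be swapped without touching it.
async function deployAndChat(adapter: FrameworkAdapter, message: string): Promise<string> {
  return adapter.chat(message);
}
```

Because the deployment code types against the interface, swapping `OpenAISdkAdapter` for `LangChainAdapter` requires no changes downstream.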
```
Agent Frameworks (OpenAI SDK, LangChain, etc.)
                    ↓
        ai-agent-runtime (This Package)
                    ↓
Production Infrastructure (Lambda, Docker, etc.)
```

The runtime ensures that regardless of which agent framework you use, deployment and management remain consistent.
```bash
npm install @ai-agent-platform/runtime
```
```typescript
import { createAgentRuntime, loadManifestFromFile } from '@ai-agent-platform/runtime';

// Load agent configuration
const manifest = await loadManifestFromFile('./agent.yaml');

// Create runtime with custom tools
const runtime = await createAgentRuntime(manifest, customTools, {
  verbose: true,
  port: 3000,
});

// Chat with the agent
const response = await runtime.chat('Hello, how can you help me?');
console.log(response);
```
```yaml
name: my-agent
version: 1.0.0
description: A helpful AI assistant
model: gpt-5
instructions: |
  You are a helpful AI assistant with access to various tools.
  Always be helpful and accurate.
temperature: 0.7
mcpServers:
  - name: filesystem
    url: npx @modelcontextprotocol/server-filesystem
    required: true
requiredEnvVars:
  - OPENAI_API_KEY
tags:
  - assistant
  - helpful
```
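To make the manifest contract concrete, here is a rough sketch of the kind of structural check a validator performs over the fields shown above. The `MinimalManifest` type and `validateMinimalManifest` function are illustrative names only — the package's actual `validateManifest` is richer than this:

```typescript
// Illustrative only: a minimal structural check over the manifest fields
// shown above. The package's real validateManifest does more than this.
interface MinimalManifest {
  name: string;
  version: string;
  model: string;
  instructions: string;
  requiredEnvVars?: string[];
}

function validateMinimalManifest(data: any): MinimalManifest {
  // Every core field must be a non-empty string.
  const missing = ['name', 'version', 'model', 'instructions'].filter(
    (key) => typeof data?.[key] !== 'string' || data[key].length === 0
  );
  if (missing.length > 0) {
    throw new Error(`Manifest validation failed: missing ${missing.join(', ')}`);
  }
  return data as MinimalManifest;
}
```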
Main class for running standalone agents:
```typescript
import { AgentRuntime } from '@ai-agent-platform/runtime';

const runtime = new AgentRuntime({
  model: 'gpt-5',
  temperature: 0.7,
  verbose: true,
});

await runtime.initialize(manifest, customTools);
```
Define agent-specific tools:
```typescript
import { z } from 'zod';
import type { ToolDefinition } from '@ai-agent-platform/runtime';

const customTools: ToolDefinition[] = [
  {
    name: 'weather_check',
    description: 'Check current weather for a location',
    parameters: z.object({
      location: z.string().describe('City name or coordinates'),
    }),
    execute: async ({ location }) => {
      // Your weather API integration
      return { temperature: 72, condition: 'sunny' };
    },
  },
];
```
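When the model requests a tool call, the runtime looks the tool up by name and invokes its `execute` function with the parsed arguments. A rough, self-contained sketch of that dispatch step — the `SimpleTool` type and `dispatchToolCall` helper are hypothetical, and the runtime's internals may differ:

```typescript
// Illustrative sketch of tool dispatch; not the package's internals.
type SimpleTool = {
  name: string;
  execute: (args: Record<string, unknown>) => Promise<unknown>;
};

async function dispatchToolCall(
  tools: SimpleTool[],
  name: string,
  args: Record<string, unknown>
): Promise<unknown> {
  // Look the tool up by name and run it with the model-supplied arguments.
  const tool = tools.find((t) => t.name === name);
  if (!tool) {
    throw new Error(`Unknown tool: ${name}`);
  }
  return tool.execute(args);
}
```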
Create a REST API for your agent:
```typescript
import express from 'express';
import { createAgentRuntime } from '@ai-agent-platform/runtime';

const app = express();
app.use(express.json());

const runtime = await createAgentRuntime(manifest, customTools);

app.post('/chat', async (req, res) => {
  const { message } = req.body;
  const response = await runtime.chat(message);
  res.json({ response });
});

app.listen(3000);
```
The runtime includes these built-in tools:
- `calculator`: Mathematical calculations
- `read_file`: Read file contents
- `write_file`: Write to files
- `list_directory`: List directory contents
- `shell`: Execute shell commands
- `web_search`: Web search (mock implementation)
Connect to Model Context Protocol servers:
```yaml
# In agent.yaml
mcpServers:
  - name: filesystem
    url: npx @modelcontextprotocol/server-filesystem /path/to/allowed/dir
    required: true
  - name: database
    url: node ./custom-mcp-server.js
    env:
      DB_CONNECTION: "postgresql://..."
    required: false
```
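The `url` field above holds a shell-style command line rather than a network address. Before a stdio-based MCP server can be spawned, that string has to be split into an executable and its arguments. A minimal sketch of that step — `parseMcpCommand` is a hypothetical helper, not a package export, and it deliberately ignores shell quoting:

```typescript
// Hypothetical helper: split an mcpServers `url` command string into the
// executable and its arguments, as a stdio MCP launcher might do before
// spawning the process. Quoting and escaping are deliberately ignored.
function parseMcpCommand(url: string): { command: string; args: string[] } {
  const parts = url.trim().split(/\s+/);
  return { command: parts[0], args: parts.slice(1) };
}
```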
Run the agent in an interactive terminal session:

```typescript
import { startInteractiveMode } from './interactive.js';

await startInteractiveMode(runtime);
```
Serve the agent over HTTP:

```typescript
import { startServer } from './server.js';

await startServer(runtime, 3000);
```
Deploy as an AWS Lambda handler:

```typescript
export const handler = async (event, context) => {
  const runtime = await createAgentRuntime(manifest, customTools);
  const response = await runtime.chat(event.message);
  return { statusCode: 200, body: JSON.stringify({ response }) };
};
```
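Creating the runtime inside the handler re-initializes it on every invocation. A common Lambda refinement is to cache it at module scope so warm invocations reuse it. The sketch below stubs the factory with a local `makeRuntime` (the real `createAgentRuntime` needs a manifest and credentials, so this keeps the pattern self-contained):

```typescript
// Sketch: cache the runtime across warm Lambda invocations.
// `makeRuntime` stands in for createAgentRuntime so the example is
// self-contained; swap in the real factory in production.
type Runtime = { chat: (message: string) => Promise<string> };

let initCount = 0;
async function makeRuntime(): Promise<Runtime> {
  initCount += 1; // the real factory would load the manifest and tools here
  return { chat: async (message) => `echo: ${message}` };
}

let cachedRuntime: Promise<Runtime> | undefined;

export const handler = async (event: { message: string }) => {
  // First (cold) invocation initializes; warm invocations reuse the promise.
  if (!cachedRuntime) {
    cachedRuntime = makeRuntime();
  }
  const runtime = await cachedRuntime;
  const response = await runtime.chat(event.message);
  return { statusCode: 200, body: JSON.stringify({ response }) };
};
```

Caching the promise (rather than the resolved runtime) also prevents duplicate initialization if two requests race during a cold start.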
Containerize with Docker. Note that the build step needs dev dependencies, so install everything, build, then prune:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
RUN npm prune --omit=dev
EXPOSE 3000
CMD ["npm", "start"]
```
```typescript
class AgentRuntime {
  constructor(options: RuntimeOptions);

  // Initialize with manifest and custom tools
  async initialize(manifest: AgentManifest, customTools?: ToolDefinition[]): Promise<void>;

  // Single message chat
  async chat(message: string): Promise<string>;

  // Streaming chat
  async chatStream(message: string, onChunk?: (text: string) => void): Promise<string>;

  // Conversation management
  getHistory(): Message[];
  clearHistory(): void;

  // Add tools after initialization
  addTools(tools: ToolDefinition[]): void;

  // Clean up resources
  async cleanup(): Promise<void>;
}
```
```typescript
// Load and validate manifest
async function loadManifestFromFile(filePath: string): Promise<AgentManifest>;
function validateManifest(data: any): AgentManifest;

// Environment variable checking
function checkRequiredEnvVars(requiredVars: string[]): { missing: string[]; present: string[] };

// Convenience function
async function createAgentRuntime(
  manifest: AgentManifest,
  customTools?: ToolDefinition[],
  options?: RuntimeOptions
): Promise<AgentRuntime>;
```
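A reasonable reading of the `checkRequiredEnvVars` contract above is to partition the names by presence in `process.env`. The sketch below (`checkEnvVars` is a local illustrative name, not copied from the package source) shows that behavior:

```typescript
// Illustrative implementation of the checkRequiredEnvVars contract above;
// the package's actual implementation may differ in detail.
function checkEnvVars(requiredVars: string[]): { missing: string[]; present: string[] } {
  const missing: string[] = [];
  const present: string[] = [];
  for (const name of requiredVars) {
    const value = process.env[name];
    // Treat unset and empty-string variables as missing.
    (value !== undefined && value !== '' ? present : missing).push(name);
  }
  return { missing, present };
}
```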
Required environment variables:
```bash
OPENAI_API_KEY=your-openai-api-key
```

## Error Handling
```typescript
try {
  const runtime = await createAgentRuntime(manifest, customTools);
  const response = await runtime.chat(message);
} catch (error: any) {
  if (error.message.includes('Manifest validation failed')) {
    // Handle validation errors
  } else if (error.message.includes('OPENAI_API_KEY')) {
    // Handle missing API key
  } else {
    // Handle other errors
  }
}
```

## Version Updates
When the platform team releases new runtime versions:
```bash
# Update to the latest version
npm update @ai-agent-platform/runtime

# Update to a specific version
npm install @ai-agent-platform/runtime@1.2.0

# Rebuild and redeploy
npm run build
npm run deploy
```

## TypeScript Support
The package includes full TypeScript definitions:
```typescript
import type {
  AgentManifest,
  ToolDefinition,
  RuntimeOptions,
  Message,
  ChatSession,
} from '@ai-agent-platform/runtime';
```

See the examples directory for complete implementation examples:
- Basic Agent: Simple chat agent
- HTTP Server: REST API server
- Lambda Function: Serverless deployment
- Custom Tools: Advanced tool integration
- MCP Integration: External service connections
This package is part of the AI Agent Platform. See the main repository for contribution guidelines.
MIT License - see LICENSE file for details.
---
For more information, visit the AI Agent Platform documentation.