FluidTools
AI-powered multi-tool API agent with multi-provider support (OpenAI, Anthropic, Ollama, Gemini, Nebius)
Installation
```bash
npm install fluidtools
```
Quick Start
1. Generate Tools from a Postman Collection
```bash
npx fluidtools ./api.json ./tools.ts
```
Or programmatically:
```typescript
import fs from "fs";
import { postmanToLangChainCode } from "fluidtools";

const collection = JSON.parse(fs.readFileSync("./api.json", "utf-8"));
const code = postmanToLangChainCode(collection);
fs.writeFileSync("./tools.ts", code);
```
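The generated `tools.ts` itself is not shown here. As a rough illustration of the idea only (the interface, field names, and factory below are assumptions for this sketch, not FluidTools' actual generated output), each endpoint becomes a small descriptor whose invoke function performs the HTTP call:

```typescript
// Hypothetical sketch of a generated tool — field names are
// illustrative assumptions, not FluidTools' real output format.
interface GeneratedTool {
  name: string;
  description: string;
  invoke: (args: Record<string, string>, token?: string) => Promise<string>;
}

// One tool per endpoint: path variables are interpolated, and the
// bearer token (if present) is forwarded as an Authorization header.
const getUserTool: GeneratedTool = {
  name: "get_user",
  description: "GET /users/:id — fetch a single user's details",
  invoke: async (args, token) => {
    const res = await fetch(`https://api.example.com/users/${args.id}`, {
      headers: token ? { Authorization: `Bearer ${token}` } : {},
    });
    return res.text();
  },
};

// A generateTools-style factory simply bundles the descriptors.
export const exampleGenerateTools = (): GeneratedTool[] => [getUserTool];
```

The hypothetical `api.example.com` URL stands in for whatever base URL your Postman collection defines.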
2. Create Your AI Agent Server
```typescript
import express from "express";
import { FluidToolsClient, loadProviderConfigFromEnv } from "fluidtools";
import { generateTools } from "./tools.ts"; // Generated tools

const app = express();
app.use(express.json());

const providerConfig = loadProviderConfigFromEnv();
const fluidClient = new FluidToolsClient(
  providerConfig,
  generateTools,
  "You are a helpful API assistant.",
  10,   // max tool calls
  true  // debug mode
);

app.get("/", async (req, res) => {
  const { query } = req.query;
  const { authorization } = req.headers;
  const token = authorization?.split(" ")[1];
  const response = await fluidClient.query(query, token);
  res.send({ message: response });
});

app.listen(8000);
```
3. Test It
```bash
curl -X GET "http://localhost:8000/?query=Get user details and list their projects" \
  -H "Authorization: Bearer YOUR_TOKEN"
```
Architecture
System Overview
```mermaid
graph TD
    A[Postman 2.1 JSON] --> B[CLI Tool<br/>fluidtools]
    B --> C[Tool Generation<br/>TypeScript + Zod Schemas]
    C --> D[FluidTools Client]
    D --> E[Optional Embedding Service<br/>Semantic Tool Selection]
    D --> F[System Prompt<br/>Custom Chatbots]
    F --> G[LangGraph Agent<br/>Orchestration & Memory]
    G --> H[Multi-Provider LLM Support]
    H --> I[Multiple Model Support]
    I --> J[Multi-Language Support<br/>Babel Integration]
    J --> K[Server Integration<br/>Express/Fastify/Koa]
    K --> L[API Exposed<br/>REST/WebSocket]

    subgraph "🔧 Tool Conversion Pipeline"
        A
        B
        C
    end

    subgraph "🤖 AI Agent Core"
        D
        F
        G
        H
        I
        J
    end

    subgraph "🌐 Integration Layer"
        K
        L
    end

    subgraph "⚡ Security & Control"
        M[Human-in-Loop<br/>Tool Confirmation]
        N[Exact Tool Selection<br/>Security Controls]
    end

    G --> M
    M --> N

    subgraph "Provider Ecosystem"
        O[OpenAI<br/>GPT-4, GPT-3.5]
        P[Anthropic<br/>Claude 3.5, Opus]
        Q[Ollama<br/>Local Models]
        R[Gemini<br/>2.5 Flash, Pro]
        S[Nebius<br/>Kimi-K2]
    end

    I --> O
    I --> P
    I --> Q
    I --> R
    I --> S

    L --> T[Chatbot UI<br/>Gradio/React/Web]
```
Core Components
1. Postman Collection Processing
- Parses Postman 2.1 JSON format
- Extracts requests, parameters, bodies, and schemas
- Generates TypeScript tools with automatic Zod validation
2. Tool Generation Engine
- Converts each API endpoint into a LangChain tool
- Handles path variables, query parameters, headers
- Supports all HTTP methods (GET, POST, PUT, DELETE, PATCH)
- Auto-generates meaningful descriptions
3. Multi-Provider LLM Integration
- Unified interface for different AI providers
- Configurable model selection and API keys
- Consistent response formatting
4. LangGraph Orchestration
- Sequential tool execution with memory
- State persistence using checkpointer
- Built-in retry mechanisms and error handling
5. Optional Embedding Layer
- Semantic indexing of tool definitions
- Cosine similarity-based tool selection
- Reduces token usage for large toolsets
6. Server Integration
- Session-based conversation management
- Tool call confirmation system
- Rate limiting and authentication
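The embedding layer (component 5) ranks tools against the user's query by cosine similarity and keeps only the top matches. A minimal, self-contained sketch of that ranking step follows; in practice the embedding vectors come from an embedding model, so the tiny vectors here are stand-ins:

```typescript
// Sketch of cosine-similarity tool selection (component 5).
// Real embeddings come from a model; this only shows the ranking.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface IndexedTool {
  name: string;
  embedding: number[]; // embedding of the tool's description
}

// Keep only the topK most relevant tools, shrinking the prompt
// (and token usage) for large toolsets.
function selectTools(
  queryEmbedding: number[],
  tools: IndexedTool[],
  topK: number
): string[] {
  return tools
    .map((t) => ({ name: t.name, score: cosineSimilarity(queryEmbedding, t.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK)
    .map((t) => t.name);
}
```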
Data Flow
```
Postman Collection JSON
          │
          ▼
CLI Tool (fluidtools)
          │
          ▼
TypeScript Tool Code
          │
          ▼
Express/Fastify Server
          │
          ▼
FluidTools Client
          │
          ▼
LangGraph Agent
          │
          ▼
LLM Provider + Tools
          │
          ▼
API Calls + Responses
          │
          ▼
User-Friendly Chat Response
```
Demo 1: Gradio Integration (Public Testing)
Located in ./demo/server/, this demo provides a complete Express server with Gradio UI integration for testing your AI agents:
Features
- Web upload interface for Postman collections
- Real-time chat with your AI agent
- Provider selection (OpenAI, Anthropic, etc.)
- Rate limiting for free tier testing
- Tool confirmation dialogs
- Session management
Running the Backend
```bash
cd demo/server
npm install
npm start
```
Backend runs on http://localhost:3000
Running the Frontend
```bash
cd demo/gradioServer
pip install -r requirements.txt
python app.py
```
Frontend runs on http://localhost:7860 - open this in your browser for the beautiful glassmorphic chat interface with drag-and-drop Postman collection upload and real-time AI chat.
Demo 2: Real-World Integration (Cloud API Example)
Located in ./demo2/backend/, this demo shows a production-ready integration with a cloud provider API:
Features
- Pre-generated tools from Ace Cloud API
- Simplified server setup
- Custom system prompts
- Environment variable configuration
- Tool approval workflows
This demo converts a comprehensive cloud API (instances, volumes, networks, billing, etc.) into AI tools.
Running the Backend
```bash
cd demo2/backend
npm install
npm run dev
```
Backend runs on http://localhost:8000
Running the Frontend
```bash
cd demo2/frontend
npm install
npm run dev
```
Frontend runs on http://localhost:5173 - features a modern React chat interface with:
- 🎤 Voice input/output capabilities (STT/TTS)
- 📱 Responsive design with markdown rendering
- ✅ Tool approval dialogs for sensitive operations
- 🔄 Real-time message streaming
- 🎨 Beautiful UI with copy/retry functionality
- 🔧 Advanced chatbot features
The React app connects to the backend API to provide a complete user experience for interacting with your AI agent.
API Reference
FluidToolsClient
Main class for managing AI agents.
```typescript
new FluidToolsClient(
  providerConfig: ProviderConfig,
  toolsGenerator: Function,
  systemInstructions?: string,
  maxToolCalls?: number,
  debug?: boolean,
  expireAfterSeconds?: number,
  confirmationConfig?: ToolConfirmationConfig,
  toolsConfig?: Record<string, unknown>,
  embeddingConfig?: EmbeddingConfig
)
```
Methods
- query(query: string, accessToken?: string): Execute natural language query
- clearThread(accessToken?: string): Clear conversation memory
- getPendingConfirmations(accessToken?: string): Check pending tool approvals
- approveToolCall(toolCallId: string, accessToken?: string): Approve pending tool
- rejectToolCall(toolCallId: string, accessToken?: string): Reject pending tool
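The confirmation methods above imply a pending-approval store behind the scenes. As an illustrative mock of that flow only (this is not FluidTools' internals — the class and fields below are invented for the sketch), the lifecycle looks like this:

```typescript
// Mock of a human-in-the-loop confirmation store. Invented for
// illustration; NOT FluidTools' actual implementation.
type Verdict = "pending" | "approved" | "rejected";

class ConfirmationStore {
  private pending = new Map<string, { toolName: string; verdict: Verdict }>();

  // The agent parks a sensitive tool call here instead of executing it.
  requestConfirmation(toolCallId: string, toolName: string): void {
    this.pending.set(toolCallId, { toolName, verdict: "pending" });
  }

  getPendingConfirmations(): string[] {
    return [...this.pending.entries()]
      .filter(([, v]) => v.verdict === "pending")
      .map(([id]) => id);
  }

  approveToolCall(toolCallId: string): boolean {
    const entry = this.pending.get(toolCallId);
    if (!entry || entry.verdict !== "pending") return false;
    entry.verdict = "approved"; // the agent may now execute the call
    return true;
  }

  rejectToolCall(toolCallId: string): boolean {
    const entry = this.pending.get(toolCallId);
    if (!entry || entry.verdict !== "pending") return false;
    entry.verdict = "rejected"; // the agent reports the call was denied
    return true;
  }
}
```

A UI layer (like the tool approval dialogs in the demos) polls the pending list and calls approve or reject with the tool call's id.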
Provider Configuration
```bash
# Environment variables
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
OLLAMA_BASE_URL=http://localhost:11434
```

Or programmatically:

```typescript
const config = {
  provider: "openai",
  model: "gpt-4",
  apiKey: process.env.OPENAI_API_KEY,
  temperature: 0.1
};
```
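How loadProviderConfigFromEnv resolves these variables isn't documented here; a hypothetical resolution along these lines (the precedence order and default model names are assumptions for the sketch, not the library's documented behavior) would pick whichever key is set:

```typescript
// Hypothetical sketch of env-based provider resolution. Precedence
// order and default models are assumptions, not the library's behavior.
interface ProviderConfig {
  provider: "openai" | "anthropic" | "ollama";
  model: string;
  apiKey?: string;
  baseUrl?: string;
}

function resolveProviderConfig(
  env: Record<string, string | undefined>
): ProviderConfig {
  if (env.OPENAI_API_KEY) {
    return { provider: "openai", model: "gpt-4", apiKey: env.OPENAI_API_KEY };
  }
  if (env.ANTHROPIC_API_KEY) {
    return {
      provider: "anthropic",
      model: "claude-3-5-sonnet",
      apiKey: env.ANTHROPIC_API_KEY,
    };
  }
  // Ollama needs no API key; fall back to a local model.
  return {
    provider: "ollama",
    model: "llama3",
    baseUrl: env.OLLAMA_BASE_URL ?? "http://localhost:11434",
  };
}
```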
CLI Usage
Generate tools from Postman collection:
```bash
fluidtools <collection-file> [output-file] [--help]

# Examples
fluidtools api.json tools.ts
fluidtools ./collections/my-api.json
```
Contributing
1. Fork the repository
2. Create feature branch: git checkout -b feature/amazing-feature
3. Commit changes: git commit -m 'Add amazing feature'
4. Push to branch: git push origin feature/amazing-feature
5. Open a Pull Request