# web-ai-service

A TypeScript-based Web AI Service that creates configurable AI-powered endpoints from YAML definitions.

```bash
npm install web-ai-service
```

A TypeScript-based workflow engine that creates dynamic API endpoints from YAML workflow definitions. Build powerful AI-powered APIs with LLM calls, custom code execution, and data transformations, all without writing server boilerplate.
## Features

- **YAML-Based Configuration** - Define API endpoints declaratively
- **Multi-LLM Support** - Built-in support for Gemini, OpenAI, Anthropic, and Grok
- **Custom Code Nodes** - Execute TypeScript functions in your workflows
- **Parallel Execution** - Run multiple nodes concurrently with error strategies
- **Data Transformation** - Reduce, split, and map data with JSONPath
- **Input Validation** - JSON Schema validation on request inputs
- **Type-Safe** - Full TypeScript support with strict typing
- **Auto-Routing** - Endpoint folders automatically become API routes
- **Plugin System** - Extensible with Supabase and custom plugins
---

## Table of Contents

1. Quick Start
2. Project Structure
3. Creating Endpoints
4. Node Types
5. Using Plugins
6. Configuration
7. Commands Reference
8. Troubleshooting
9. Documentation

---
## Quick Start

The easiest way to start is with the scaffolder:

```bash
npx create-web-ai-service
```
You'll be prompted to:

1. Enter your project name - e.g., my-api
2. Select plugins - Choose from available plugins like Supabase

Or use command-line arguments for non-interactive setup:

```bash
npx create-web-ai-service my-api --plugins supabase
```
After scaffolding:

```bash
cd my-api
cp .env.example .env   # Configure your API keys
npm run dev            # Start the server
```

Your API is now running at http://localhost:3000!
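
The scaffolded hello example endpoint (see Project Structure below) gives you an immediate smoke test; the exact response depends on the example workflow's exit node:

```bash
curl http://localhost:3000/hello
```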
Add to an existing project:

```bash
npm install web-ai-service
```

Or install globally:

```bash
npm install -g web-ai-service
web-ai-service   # Run from any directory with a src/endpoints folder
```
---

## Project Structure

When you create a new project, you'll get this structure:

```
my-api/
├── src/
│   ├── endpoints/              # Your API endpoints
│   │   └── hello/              # Example: GET /hello
│   │       ├── GET.yaml        # Workflow definition
│   │       ├── codes/          # TypeScript code nodes
│   │       │   └── format-greeting.ts
│   │       └── prompts/        # LLM system prompts
│   │           └── greeting-system.txt
│   │
│   └── plugins/                # Shared code modules
│       └── supabase.ts         # (if selected during setup)
│
├── .env                        # Your API keys (gitignored)
├── .env.example                # Template for environment variables
├── package.json
└── tsconfig.json
```
Key concepts:

| Concept | Description |
|---------|-------------|
| Endpoint | A folder in src/endpoints/ that becomes an API route |
| Workflow | A YAML file (e.g., POST.yaml, GET.yaml) defining the processing pipeline |
| Stage | A sequential step in the workflow containing one or more nodes |
| Node | An individual processing unit (LLM call, code execution, etc.) |

Endpoint folders map to routes automatically:

| Folder Path | HTTP Method | API Route |
|-------------|-------------|-----------|
| src/endpoints/hello/GET.yaml | GET | /hello |
| src/endpoints/summarize/POST.yaml | POST | /summarize |
| src/endpoints/users/profile/GET.yaml | GET | /users/profile |
---

## Creating Endpoints

### Basic Example

Create a POST endpoint at /summarize:
1. Create the folder structure:

   ```bash
   mkdir -p src/endpoints/summarize/{codes,prompts}
   ```

2. Create the system prompt (src/endpoints/summarize/prompts/system.txt):

   ```text
   You are a concise summarization assistant. Summarize the provided text clearly in 2-3 paragraphs.
   ```

3. Create the workflow (src/endpoints/summarize/POST.yaml):

   ```yaml
   version: "1.0"
   stages:
     - name: main
       nodes:
         summarize:
           type: llm
           input: $input.text
           provider: gemini
           model: gemini-2.0-flash-lite
           temperature: 0.3
           maxTokens: 1024
           systemMessages:
             - file: system.txt
   ```

4. Test it:

   ```bash
   curl -X POST http://localhost:3000/summarize \
     -H "Content-Type: application/json" \
     -d '{"text": "Long text to summarize..."}'
   ```
### Adding Input Validation

Create a code node to validate inputs before processing:

src/endpoints/summarize/codes/validate.ts:
```typescript
import type { NodeOutput } from '@workflow/types';

interface SummarizeInput {
  text?: string;
}

export default async function(input: unknown): Promise<NodeOutput> {
  const body = input as SummarizeInput;

  if (!body.text || typeof body.text !== 'string') {
    throw new Error('Missing required field: text');
  }

  if (body.text.length < 10) {
    throw new Error('Text must be at least 10 characters');
  }

  return { type: 'string', value: body.text };
}
```
Updated workflow with validation stage:

```yaml
version: "1.0"
stages:
  - name: validate
    nodes:
      check_input:
        type: code
        input: $input
        file: validate.ts

  - name: summarize
    nodes:
      summary:
        type: llm
        input: validate.check_input   # Reference previous node output
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: system.txt
```
### Multi-Stage Pipelines

Chain multiple processing stages:

```yaml
version: "1.0"
stages:
  - name: extract
    nodes:
      parse_data:
        type: code
        input: $input
        file: extract-data.ts

  - name: analyze
    nodes:
      analyze_content:
        type: llm
        input: extract.parse_data
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: analyzer-prompt.txt

  - name: format
    nodes:
      format_response:
        type: code
        input: analyze.analyze_content
        file: format-output.ts
```
### Parallel Execution

Run multiple LLM calls simultaneously within a stage:

```yaml
stages:
  - name: parallel_analysis
    nodes:
      sentiment:
        type: llm
        input: $input.text
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: sentiment-prompt.txt
      keywords:
        type: llm
        input: $input.text
        provider: openai
        model: gpt-4o-mini
        systemMessages:
          - file: keywords-prompt.txt

  - name: combine
    nodes:
      merge:
        type: reduce
        inputs:
          - parallel_analysis.sentiment
          - parallel_analysis.keywords
        mapping:
          sentiment: $.0
          keywords: $.1
```
---

## Node Types

All nodes share these common properties:

- type (required) - The node type: llm, code, reduce, split, or passthrough
- input (required for most) - The input source: $input, $input.field, or stageName.nodeName
### LLM Node

Calls an LLM provider with a prompt.

Required Properties:

```yaml
my_llm_node:
  type: llm
  input: $input                  # Input source
  provider: gemini               # Provider name: gemini | openai | anthropic | grok
  model: gemini-2.0-flash-lite   # Model identifier
```
Optional Properties:

```yaml
temperature: 0.7          # Default: 1.0. Controls randomness (0.0-1.0)
maxTokens: 1024           # Default: provider default. Max output tokens
systemMessages:           # System prompts (optional)
  - file: prompt.txt      # Load from file
    cache: true           # Enable caching (default: false)
  - text: "Direct prompt" # Or use inline text
config:                   # Provider-specific config (optional)
  topP: 0.9
  topK: 40
```
Supported Providers & Models:

| Provider | Example Models | Notes |
|----------|----------------|-------|
| gemini | gemini-2.0-flash-lite, gemini-2.0-flash, gemini-1.5-pro | Fast, cost-effective |
| openai | gpt-4o, gpt-4o-mini, gpt-4-turbo | High quality |
| anthropic | claude-3-5-sonnet-latest, claude-3-haiku-20240307 | Long context |
| grok | grok-2, grok-2-mini | xAI models |
Using LLM References (Alternative):

Define reusable LLM configurations:

```yaml
llm:
  my-summarizer:
    provider: gemini
    model: gemini-2.0-flash-lite
    temperature: 0.3

nodes:
  summarize:
    type: llm
    input: $input
    llmRef: my-summarizer   # Reference the config
    systemMessages:
      - file: prompt.txt
```
---

### Code Node

Executes a custom TypeScript function.

Required Properties:

```yaml
my_code_node:
  type: code
  input: $input
  file: my-processor.ts   # Relative to endpoint's codes/ folder
```
TypeScript Function Signature:

Your code file must export a default async function:

```typescript
import type { NodeOutput } from '@workflow/types';

export default async function(input: unknown): Promise<NodeOutput> {
  // Your logic here
  const processed = /* ... */;

  return {
    type: 'json',   // 'string' | 'json' | 'number' | 'boolean' | 'array'
    value: processed
  };
}
```
Notes:

- The input parameter is the unwrapped value from the previous node
- Must return a NodeOutput object with type and value
- Can import from @code-plugins/* for shared code
- Can use any npm packages (run npm run scan-deps to auto-install; see the sketch below)
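
To illustrate the last point, here is a minimal sketch of a code node that depends on an npm package (zod is purely an assumption for this example, not a bundled dependency); npm run scan-deps would detect and install it:

```typescript
// codes/parse-user.ts (hypothetical): a code node using an npm package.
import { z } from 'zod';
import type { NodeOutput } from '@workflow/types';

// Schema for the expected request body (illustrative field names)
const UserSchema = z.object({
  name: z.string().min(1),
  age: z.number().int().nonnegative(),
});

export default async function(input: unknown): Promise<NodeOutput> {
  // parse() throws a descriptive ZodError if the input doesn't match
  const user = UserSchema.parse(input);
  return { type: 'json', value: user };
}
```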
---

### Reduce Node

Combines multiple node outputs into a single JSON object.

Required Properties:

```yaml
merge_results:
  type: reduce
  inputs:               # Array of node references
    - stageName.node1
    - stageName.node2
  mapping:              # JSONPath mappings
    firstResult: $.0
    secondResult: $.1
    nested:
      data: $.0.someField
```
How it Works:

- Takes outputs from multiple nodes specified in inputs
- Uses JSONPath expressions in mapping to extract values
- $.0 refers to the first input, $.1 to the second, etc.
- Returns a single { type: 'json', value: {...} } object

Example:

If node1 outputs { value: { count: 10 } } and node2 outputs { value: { total: 100 } }:

```yaml
mapping:
  count: $.0.count   # Gets 10 from first input
  total: $.1.total   # Gets 100 from second input
```

Result: { count: 10, total: 100 }
---

### Split Node

Divides a single output into multiple named outputs.

Required Properties:

```yaml
split_data:
  type: split
  input: stageName.nodeName
  mapping:                     # JSONPath expressions for each output
    header: $.metadata.header
    body: $.content
    footer: $.metadata.footer
```
How it Works:

- Takes a single input (usually JSON)
- Extracts multiple values using JSONPath
- Creates named outputs accessible as nodeId.outputName

Example:

Input: { metadata: { header: 'Title' }, content: 'Body text' }

```yaml
split_data:
  type: split
  input: previous.node
  mapping:
    title: $.metadata.header   # Accessible as split_data.title
    text: $.content            # Accessible as split_data.text
```

Later nodes can reference:

```yaml
another_node:
  type: code
  input: split_data.title   # Gets 'Title'
```
---

### Passthrough Node

Passes input directly to output unchanged (useful for routing).

Required Properties:

```yaml
forward:
  type: passthrough
  input: $input
```

Notes:

- No transformation applied
- Preserves the input type
- Useful for conditional routing or stage organization
---

### Input References

All nodes (except reduce) use the input property to specify their data source:

| Reference Pattern | Description | Example |
|-------------------|-------------|---------|
| $input | Full request body | input: $input |
| $input.field | Specific field from request | input: $input.text |
| $input.nested.field | Nested field access | input: $input.user.name |
| stageName.nodeName | Output from another node | input: extract.parser |
| nodeName.outputName | Split node output | input: splitter.header |
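
As a sketch of these patterns combined in one workflow (the stage, node, and file names below are illustrative only):

```yaml
stages:
  - name: extract
    nodes:
      parser:
        type: code
        input: $input.document     # field reference into the request body
        file: parse.ts

  - name: respond
    nodes:
      answer:
        type: llm
        input: extract.parser      # stageName.nodeName node reference
        provider: gemini
        model: gemini-2.0-flash-lite
        systemMessages:
          - file: prompt.txt
```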
---

### Workflow Structure

Every workflow must follow these rules:

Single-Stage Workflows:

```yaml
version: "1.0"
stages:
  - name: main          # Must be named 'main'
    nodes:
      my_node:          # Must have exactly 1 node
        type: llm
        input: $input   # Must use $input
        # ... node config ...
```
Multi-Stage Workflows:

```yaml
version: "1.0"
stages:
  - name: preprocess    # First stage: any name
    nodes:
      validator:        # First node must use $input
        type: code
        input: $input
        # ... config ...

  - name: process       # Middle stage(s): any name, multiple nodes OK
    nodes:
      analyze:
        type: llm
        input: preprocess.validator
        # ... config ...
      extract:
        type: code
        input: preprocess.validator
        # ... config ...

  - name: postprocess   # Last stage: any name
    nodes:
      formatter:        # Must have exactly 1 node (exit node)
        type: code
        input: process.analyze
        # ... config ...
```
Rules:
- First stage's first node must use $input or $input.field as input
- Last stage must have exactly 1 node (its output becomes the API response)
- Middle stages can have any number of nodes
- Stage names can be anything (no longer required to be "entry" and "exit")
---

## Using Plugins

### Supabase

If you selected Supabase during project setup, you can use it in code nodes:

```typescript
import { supabase } from '@code-plugins/supabase.js';
import type { NodeOutput } from '@workflow/types';

export default async function(input: unknown): Promise<NodeOutput> {
  const { data, error } = await supabase
    .from('articles')
    .select('*')
    .limit(10);

  if (error) {
    throw new Error(`Database error: ${error.message}`);
  }

  return { type: 'json', value: data };
}
```
Configure in .env:

```bash
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_KEY=your-service-key   # Optional
```
### Custom Plugins

Add files to src/plugins/ and import them via @code-plugins/*:

```typescript
// src/plugins/my-helper.ts
export function formatDate(date: Date): string {
  return date.toISOString().split('T')[0];
}

// In any code node:
import { formatDate } from '@code-plugins/my-helper.js';
```
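
A minimal code node using that helper might look like the following sketch (the createdAt input field is an assumed shape, not part of the service):

```typescript
// codes/format-created.ts (illustrative)
import { formatDate } from '@code-plugins/my-helper.js';
import type { NodeOutput } from '@workflow/types';

export default async function(input: unknown): Promise<NodeOutput> {
  // Assumed input shape: { createdAt: string }
  const { createdAt } = input as { createdAt: string };
  return { type: 'string', value: formatDate(new Date(createdAt)) };
}
```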
---

## Configuration

### Server Settings

| Variable | Default | Description |
|----------|---------|-------------|
| PORT | 3000 | Server port |
| LOG_LEVEL | info | Logging level (debug, info, warn, error) |
| LLM_TIMEOUT_MS | 30000 | LLM request timeout in milliseconds |
### LLM Provider Keys

You need at least one provider configured:

| Variable | Provider |
|----------|----------|
| GEMINI_API_KEY | Google Gemini |
| OPENAI_API_KEY | OpenAI |
| ANTHROPIC_API_KEY | Anthropic Claude |
| GROK_API_KEY | xAI Grok |
### Plugin Variables

| Variable | Plugin |
|----------|--------|
| SUPABASE_URL | Supabase |
| SUPABASE_ANON_KEY | Supabase |
| SUPABASE_SERVICE_KEY | Supabase (optional) |
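
Putting these together, a minimal .env might look like this (values are placeholders; include only the providers and plugins you actually use):

```bash
PORT=3000
LOG_LEVEL=info
LLM_TIMEOUT_MS=30000

# At least one provider key is required
GEMINI_API_KEY=your-gemini-key

# Only if the Supabase plugin is enabled
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
```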
---

## Commands Reference

| Command | Description |
|---------|-------------|
| npm run dev | Start development server with hot reload |
| npm run build | Compile TypeScript to JavaScript |
| npm start | Start production server |
| npm run validate | Validate all workflows |
| npm run create-endpoint | Scaffold a new endpoint interactively |
| npm run scan-deps | Scan and install code node dependencies |
| npm run lint | Run ESLint |
| npm run format | Format code with Prettier |
---

## Troubleshooting

| Error | Solution |
|-------|----------|
| "Provider not found" | Check the provider name is valid and its API key is set in .env |
| "Code node file not found" | Verify the file exists in the codes/ folder with the correct filename |
| "Cannot find module '@workflow/types'" | Run npm run build or restart the TypeScript server |
| LLM timeout | Increase LLM_TIMEOUT_MS in .env or use a faster model |
| "SUPABASE_URL required" | Add Supabase credentials to .env |
---

## Documentation

For more detailed guides, see the docs/ folder:

- Getting Started - Complete setup walkthrough
- Creating Endpoints - Advanced endpoint patterns
- Using Plugins - Plugin configuration and custom plugins
- Configuration Reference - All environment options

## License

ISC