# AI Gateway Provider for AI-SDK
This library provides an AI Gateway Provider for the Vercel AI SDK, enabling you to seamlessly integrate multiple AI models from different providers behind a unified interface. It leverages Cloudflare's AI Gateway to manage and optimize your AI model usage.
* Runtime Agnostic: Works in all JavaScript runtimes supported by the Vercel AI SDK including Node.js, Edge Runtime, and more.
* Automatic Provider Fallback: ✨ Define an array of models and the provider will automatically fall back to the next available provider if one fails, ensuring high availability and resilience for your AI applications.
```bash
npm install ai-gateway-provider
```
Basic usage, passing the upstream provider's API key directly:

```typescript
import { createAiGateway } from 'ai-gateway-provider';
import { createOpenAI } from 'ai-gateway-provider/providers/openai';
import { generateText } from 'ai';

const aigateway = createAiGateway({
  accountId: '{CLOUDFLARE_ACCOUNT_ID}',
  gateway: '{GATEWAY_NAME}',
  apiKey: '{CF_AIG_TOKEN}', // required if your AI Gateway has authentication enabled
});

const openai = createOpenAI({ apiKey: '{OPENAI_API_KEY}' });

const { text } = await generateText({
  model: aigateway(openai.chat('gpt-5.1')),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
If you have stored the provider's API key in AI Gateway itself, you can omit it when creating the provider:

```typescript
import { createAiGateway } from 'ai-gateway-provider';
import { createOpenAI } from 'ai-gateway-provider/providers/openai';
import { generateText } from 'ai';

const aigateway = createAiGateway({
  accountId: '{CLOUDFLARE_ACCOUNT_ID}',
  gateway: '{GATEWAY_NAME}',
  apiKey: '{CF_AIG_TOKEN}',
});

const openai = createOpenAI();

const { text } = await generateText({
  model: aigateway(openai.chat('gpt-5.1')),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
To use AI Gateway's unified API, for example with a dynamic route:

```typescript
import { createAiGateway } from 'ai-gateway-provider';
import { unified } from 'ai-gateway-provider/providers/unified';
import { generateText } from 'ai';

const aigateway = createAiGateway({
  accountId: '{CLOUDFLARE_ACCOUNT_ID}',
  gateway: '{GATEWAY_NAME}',
  apiKey: '{CF_AIG_TOKEN}',
});

const { text } = await generateText({
  model: aigateway(unified('dynamic/customer-support')),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
```typescript
// Assumes `aigateway` plus `anthropic`, `openai`, and `mistral` provider
// instances were created with their respective create* factories, as above.

// Define multiple provider options with fallback priority
const model = aigateway([
  anthropic('claude-3-5-haiku-20241022'), // primary choice
  openai.chat('gpt-4o-mini'),             // first fallback
  mistral('mistral-large-latest'),        // second fallback
]);

// The provider automatically tries the next model if the previous ones fail
const { text } = await generateText({
  model,
  prompt: 'Suggest three names for my tech startup.',
});
```
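Conceptually, the fallback behavior can be sketched in plain TypeScript. This is a simplified illustration of the pattern, not the library's actual implementation; `withFallback` is a hypothetical helper:

```typescript
// Sequential fallback sketch: try each async "model call" in order and
// return the first successful result; if all fail, rethrow the last error.
type ModelCall<T> = () => Promise<T>;

async function withFallback<T>(calls: ModelCall<T>[]): Promise<T> {
  let lastError: unknown = new Error('no calls provided');
  for (const call of calls) {
    try {
      return await call();
    } catch (err) {
      lastError = err; // remember the failure and move on to the next provider
    }
  }
  throw lastError;
}
```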
Binding Benefits:
- Faster Requests: Saves milliseconds by avoiding open internet routing.
- Enhanced Security: Uses a special pre-authenticated pipeline.
- No Cloudflare API Token Required: Authentication is handled by the binding.
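To use the binding from a Cloudflare Worker, Workers AI must be exposed in your wrangler configuration. A minimal sketch, assuming the binding name `AI` so it is available as `env.AI`:

```toml
# wrangler.toml (minimal sketch)
[ai]
binding = "AI" # exposed to your Worker as env.AI
```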
```typescript
const aigateway = createAiGateway({
  binding: env.AI.gateway('my-gateway'),
  options: {
    // Optional per-request override
    skipCache: true,
  },
});

const openai = createOpenAI({ apiKey: '{OPENAI_API_KEY}' });
const anthropic = createAnthropic({ apiKey: '{ANTHROPIC_API_KEY}' });

const model = aigateway([
  anthropic('claude-3-5-haiku-20241022'), // primary choice
  openai.chat('gpt-4o-mini'),             // fallback if the first fails
]);

const { text } = await generateText({
  model,
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
You can customize AI Gateway settings on a per-request basis:
```typescript
const aigateway = createAiGateway({
  // ... other config
  options: {
    // All fields are optional
    cacheKey: 'my-custom-cache-key',
    cacheTtl: 3600, // cache for 1 hour
    skipCache: false,
    metadata: {
      userId: 'user123',
      requestType: 'recipe',
    },
    retries: {
      maxAttempts: 3,
      retryDelayMs: 1000,
      backoff: 'exponential',
    },
  },
});
```
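Assuming conventional semantics for the retry options (constant, linear, and exponential backoff derived from `retryDelayMs` — an assumption about the gateway's behavior, not something this document specifies), the delay before each attempt would look like:

```typescript
// Hypothetical backoff schedule matching the `retries` options above:
// 'constant'    -> base, base, base, ...
// 'linear'      -> base, 2*base, 3*base, ...
// 'exponential' -> base, 2*base, 4*base, ...
type Backoff = 'constant' | 'linear' | 'exponential';

function retryDelay(baseMs: number, backoff: Backoff, attempt: number): number {
  switch (backoff) {
    case 'constant':
      return baseMs;
    case 'linear':
      return baseMs * attempt;
    case 'exponential':
      return baseMs * 2 ** (attempt - 1);
  }
}
```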
#### API Key Authentication
* `accountId`: Your Cloudflare account ID
* `gateway`: The name of your AI Gateway
* `apiKey` (Optional): Your Cloudflare API key

#### Cloudflare AI Binding
* `binding`: Cloudflare AI Gateway binding
* `options` (Optional): Request-level AI Gateway settings
  * `cacheKey`: Custom cache key for the request
  * `cacheTtl`: Cache time-to-live in seconds
  * `skipCache`: Bypass caching for the request
  * `metadata`: Custom metadata for the request
  * `collectLog`: Enable/disable log collection
  * `eventId`: Custom event identifier
  * `requestTimeoutMs`: Request timeout in milliseconds
  * `retries`: Retry configuration
    * `maxAttempts`: Number of retry attempts (1-5)
    * `retryDelayMs`: Delay between retries in milliseconds
    * `backoff`: Retry backoff strategy (`constant`, `linear`, `exponential`)
Supported Providers:
* OpenAI
* Anthropic
* DeepSeek
* Google AI Studio
* Grok
* Mistral
* Perplexity AI
* Replicate
* Groq
Currently, the following methods are supported:
* Non-streaming text generation: using `generateText()` from the Vercel AI SDK
* Chat completions: using `generateText()` with message-based prompts

More can be added; please open an issue in the GitHub repository!
The library throws the following custom errors:
* `AiGatewayUnauthorizedError`: your AI Gateway has authentication enabled, but a valid API key was not provided
* `AiGatewayDoesNotExist`: the specified AI Gateway does not exist
This project is licensed under the MIT License - see the LICENSE file for details.
Useful links:
* Vercel AI SDK Documentation
* Cloudflare AI Gateway Documentation
* GitHub Repository