[npm](https://www.npmjs.com/package/@mcpc-tech/mcp-sampling-ai-provider) [JSR](https://jsr.io/@mcpc/mcp-sampling-ai-provider)

An AI SDK provider that lets MCP servers call AI models through the AI SDK's standard interface. This effectively transforms your MCP server into an agentic tool that can reason and make decisions, rather than just a simple connector.
This provider has specific requirements:

1. **Must run inside an MCP server** - This is not a standalone AI SDK provider; it works by forwarding requests to the MCP client.
2. **Client must support MCP sampling** - The connected MCP client must implement the sampling capability, or you can implement it yourself (see Client Sampling below).
Clients with sampling support:

- ✅ VS Code (with GitHub Copilot) - see the full list for more clients
- 🔍 Claude Code - Issue #1785
- 🔍 Cursor - Issue #3023
- 🔍 Zed - Tracking: Discussion #39761
- 🔍 Gemini CLI - Issue #10704
- 🔍 OpenAI Codex - Issue #4929
- 🔧 Or implement your own - use `setupClientSampling()` to add sampling to any MCP client (see example below)
This package lets MCP servers call language models through AI SDK's standard interface. It implements `LanguageModelV2` by forwarding requests to MCP's sampling capability.
## Installation

```bash
npm i @mcpc-tech/mcp-sampling-ai-provider
```
## Usage
```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { createMCPSamplingProvider } from "@mcpc/mcp-sampling-ai-provider";
import { generateText } from "ai";

// Create MCP server with sampling capability
const server = new Server(
  { name: "translator", version: "1.0.0" },
  { capabilities: { tools: {} } },
);

// Advertise the translate tool so clients can discover it
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "translate",
      description: "Translate text using the client's model",
      inputSchema: {
        type: "object",
        properties: {
          text: { type: "string" },
          target_lang: { type: "string" },
        },
        required: ["text", "target_lang"],
      },
    },
  ],
}));

// Register a translation tool that uses AI
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "translate") {
    // Create provider from the server
    const provider = createMCPSamplingProvider({ server });

    // Use AI SDK to translate text
    const result = await generateText({
      model: provider.languageModel({
        modelPreferences: { hints: [{ name: "copilot/gpt-4o-mini" }] },
      }),
      prompt:
        `Translate to ${request.params.arguments?.target_lang}: ${request.params.arguments?.text}`,
    });

    return { content: [{ type: "text", text: result.text }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Connect and start
const transport = new StdioServerTransport();
await server.connect(transport);
```

### Using Tools in Your MCP Server
You can use tools within your MCP server. The tools will be executed by the
server, and the MCP client only handles the LLM calls:
```typescript
import { createMCPSamplingProvider } from "@mcpc/mcp-sampling-ai-provider";
import { generateText, tool } from "ai";
import { z } from "zod";

// Create provider from the server
const provider = createMCPSamplingProvider({ server });

// Define AI SDK tools
const tools = {
  search: tool({
    description: "Search for information",
    parameters: z.object({
      query: z.string().describe("Search query"),
    }),
    execute: async ({ query }) => {
      // Tool execution happens here in the MCP server;
      // `performSearch` is your own search implementation
      const results = await performSearch(query);
      return { results };
    },
  }),
};

// Use in a tool handler
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "research") {
    const result = await generateText({
      model: provider.languageModel({
        modelPreferences: { hints: [{ name: "copilot/gpt-4o" }] },
      }),
      prompt: `Research: ${request.params.arguments?.topic}`,
      tools, // AI SDK tools are executed in the MCP server
    });

    return { content: [{ type: "text", text: result.text }] };
  }
});
```

**Note:** This is different from client sampling. Here, AI SDK tools are defined and executed within the MCP server itself; the MCP client only provides the LLM inference.
See the examples directory for complete working examples:

- generate_text_example.ts - Basic text generation
- stream_text_example.ts - Streaming responses
- generate_object_example.ts - Structured output
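As a taste of the structured-output example, here is a minimal `generateObject` sketch. It assumes the `provider` from the Usage section, and the schema is purely illustrative:

```typescript
import { generateObject } from "ai";
import { z } from "zod";

// Illustrative schema: the client's model is asked for JSON matching it,
// and the AI SDK parses and validates the result.
const { object } = await generateObject({
  model: provider.languageModel({
    modelPreferences: { hints: [{ name: "copilot/gpt-4o-mini" }] },
  }),
  schema: z.object({
    language: z.string().describe("Target language name"),
    translation: z.string().describe("Translated text"),
  }),
  prompt: "Translate 'Hello' to French.",
});

console.log(object.translation); // e.g. "Bonjour"
```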
## API
### createMCPSamplingProvider(config)
Creates an MCP sampling provider.

**Parameters:**

- `config.server` - MCP Server instance with sampling capability

**Returns:** Provider with a `languageModel(options)` method

### languageModel(options)
Creates a language model instance.

**Parameters:**

- `options.modelPreferences` - (Optional) Model preferences for this call
  - `hints` - Array of model name hints (e.g., `[{ name: "copilot/gpt-4o" }]`)
  - `costPriority` - 0-1, higher prefers cheaper models
  - `speedPriority` - 0-1, higher prefers faster models
  - `intelligencePriority` - 0-1, higher prefers more capable models

**Returns:** `LanguageModelV2` compatible with AI SDK
See
MCP Model Preferences
for details.
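For example, a minimal sketch combining a hint with priorities (the hint name is illustrative; the client makes the final model choice):

```typescript
import { generateText } from "ai";

// Assumes `provider` was created as in the Usage section
const model = provider.languageModel({
  modelPreferences: {
    hints: [{ name: "copilot/gpt-4o-mini" }], // advisory name hint
    speedPriority: 0.9, // 0-1: prefer faster models
    costPriority: 0.8, // 0-1: prefer cheaper models
  },
});

const { text } = await generateText({
  model,
  prompt: "Reply with a one-line greeting.",
});
```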
## Client Sampling (for clients without native support)
If your MCP client doesn't support sampling, you can add the sampling capability using `setupClientSampling` with model preferences:

```typescript
import { generateText } from "ai";
import {
  convertAISDKFinishReasonToMCP,
  selectModelFromPreferences,
  setupClientSampling,
} from "@mcpc/mcp-sampling-ai-provider";

// `client` is your MCP Client instance
setupClientSampling(client, {
  handler: async (params) => {
    const modelId = selectModelFromPreferences(params.modelPreferences, {
      hints: {
        "gpt-4o": "openai/gpt-4o",
        "gpt-mini": "openai/gpt-4o-mini",
      },
      priorities: {
        speed: "openai/gpt-4o-mini",
        intelligence: "openai/gpt-4o",
      },
      default: "openai/gpt-4o-mini",
    });

    const result = await generateText({
      model: modelId,
      messages: params.messages,
    });

    return {
      model: modelId,
      role: "assistant",
      content: { type: "text", text: result.text },
      stopReason: convertAISDKFinishReasonToMCP(result.finishReason),
    };
  },
});
```

### Tool Support in Client Sampling
When the MCP server requests tool usage through `createMessage`, you can convert MCP tools to AI SDK format using `convertMCPToolsToAISDK`:

```typescript
import { generateText, jsonSchema, tool } from "ai";
import {
  convertAISDKFinishReasonToMCP,
  convertMCPToolsToAISDK,
  selectModelFromPreferences,
  setupClientSampling,
} from "@mcpc/mcp-sampling-ai-provider";

setupClientSampling(client, {
  handler: async (params) => {
    const modelId = selectModelFromPreferences(params.modelPreferences, {
      default: "openai/gpt-4o-mini",
    });

    // Convert MCP tools to AI SDK format
    const aiTools = convertMCPToolsToAISDK(params.tools, { tool, jsonSchema });

    const result = await generateText({
      model: modelId,
      messages: params.messages,
      tools: aiTools, // Tools are for LLM awareness only - execution happens server-side
    });

    return {
      model: modelId,
      role: "assistant",
      content: { type: "text", text: result.text },
      stopReason: convertAISDKFinishReasonToMCP(result.finishReason),
    };
  },
});
```

**Note:** In client sampling, tools are not executed on the client side. The client only returns tool-call content blocks, which the MCP server then executes.

See client-sampling-example.ts for a complete example.

## How It Works
Simple request flow:

1. AI SDK calls the language model
2. The provider converts the request to MCP `sampling/createMessage` format
3. The MCP client handles the sampling request
4. The provider converts the response back to AI SDK format

The MCP client (e.g., VS Code, Claude Desktop) decides which actual model to use based on `modelPreferences`.
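For illustration, a request like the translation call above is forwarded roughly as the following `sampling/createMessage` params. The shape follows the MCP sampling spec; the exact values the provider sends are an assumption here:

```typescript
// Hypothetical params for sampling/createMessage (shape per the MCP spec)
const createMessageParams = {
  messages: [
    {
      role: "user",
      content: { type: "text", text: "Translate to fr: Hello" },
    },
  ],
  modelPreferences: { hints: [{ name: "copilot/gpt-4o-mini" }] },
  maxTokens: 1024, // assumed default; taken from the AI SDK call when provided
};
```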
## Limitations
- **No token counting**: MCP doesn't provide token usage (returns 0)
- **No native streaming**: MCP sampling doesn't support streaming; the provider calls `doGenerate` first, then emits the complete response as stream events (see the sketch after this list)
- **Tool support in client sampling**: when implementing client sampling, use `convertMCPToolsToAISDK()` to convert MCP tools to AI SDK format; tool calls are returned to the server for execution (see the note above)
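Despite the streaming limitation, `streamText` still works with this provider: the complete response is generated first and then replayed as stream parts. A minimal sketch, assuming the `provider` from the Usage section:

```typescript
import { streamText } from "ai";

// No true token-by-token streaming: the provider generates the full
// completion via MCP sampling, then emits it as stream events.
const result = streamText({
  model: provider.languageModel({
    modelPreferences: { hints: [{ name: "copilot/gpt-4o-mini" }] },
  }),
  prompt: "Explain MCP sampling in two sentences.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```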
## Links

- AI SDK
- MCP Specification
- MCPC Framework
## License

MIT