AI SDK v6 provider for Google Gemini CLI agentic mode
This provider spawns `gemini` as a subprocess with `--output-format stream-json`, enabling full agentic capabilities like file system access, code editing, and tool execution through the AI SDK interface.
- Full Agentic Support: Access all Gemini CLI tools (file system, code editing, shell commands, etc.)
- Streaming: Real-time streaming of text and tool calls
- Tool Streaming: Watch tool calls and results as they happen
- Approval Modes: Control tool approval behavior (default, auto_edit, yolo)
- Sandbox Mode: Run in a sandboxed environment for safety
- Session Resume: Resume previous sessions
- MCP Server Support: Integrate with MCP servers
- Custom Logging: Built-in logging with customization support
- AI SDK v6 Compatible: Works with generateText, streamText, and streamObject
- Gemini CLI installed and authenticated (or use allowNpx: true)
- Node.js 18+
- AI SDK v6+
```bash
npm install ai-sdk-provider-gemini-cli-agentic ai
```
Option 1: Install globally (recommended for frequent use):

```bash
npm install -g @google/gemini-cli
```

Option 2: Use `allowNpx: true` to run via npx (no global install needed):

```javascript
geminiCli('auto', { allowNpx: true })
```

Then authenticate:

```bash
gemini auth login
```
```javascript
import { geminiCli } from 'ai-sdk-provider-gemini-cli-agentic';
import { generateText, streamText } from 'ai';

// Basic usage
const { text } = await generateText({
  model: geminiCli('gemini-2.5-flash'),
  prompt: 'List files in the current directory',
});

// Streaming
const result = streamText({
  model: geminiCli('auto', { cwd: process.cwd() }),
  prompt: 'Explain the structure of this project',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
Create a provider instance with shared default settings:

```javascript
import { createGeminiCli } from 'ai-sdk-provider-gemini-cli-agentic';

const provider = createGeminiCli({
  defaultSettings: {
    geminiPath: '/usr/local/bin/gemini', // Custom CLI path
    cwd: '/path/to/project',             // Working directory
    approvalMode: 'auto_edit',           // Approval mode
    sandbox: true,                       // Enable sandbox
    verbose: true,                       // Verbose output
  },
});

const model = provider('gemini-2.5-flash');
```
Settings can also be passed per model:

```javascript
const model = geminiCli('gemini-2.5-flash', {
  cwd: process.cwd(),
  approvalMode: 'yolo',
  sandbox: false,
  includeDirectories: ['../shared-lib'],
  allowedTools: ['read_file', 'write_file', 'list_directory'],
  allowedMcpServerNames: ['filesystem'],
  resume: 'latest', // or session index number
  env: {
    MY_VAR: 'value',
  },
});
```
| Setting | Type | Description |
|---------|------|-------------|
| `geminiPath` | string | Path to Gemini CLI executable (default: `'gemini'`) |
| `allowNpx` | boolean | Allow falling back to `npx @google/gemini-cli` if CLI not found |
| `cwd` | string | Working directory for CLI operations |
| `approvalMode` | `'default'` \| `'auto_edit'` \| `'yolo'` | Tool approval behavior |
| `yolo` | boolean | Auto-approve all operations (alias for `approvalMode: 'yolo'`) |
| `sandbox` | boolean | Enable sandbox mode |
| `includeDirectories` | string[] | Additional directories to include |
| `allowedTools` | string[] | Tools allowed without confirmation |
| `allowedMcpServerNames` | string[] | Allowed MCP server names |
| `resume` | string \| boolean | Resume session (`'latest'`, index, or `true`) |
| `model` | string | Override model name |
| `env` | Record | Environment variables |
| `verbose` | boolean | Enable verbose logging |
| `logger` | Logger \| `false` | Custom logger or `false` to disable |
| Mode | Description |
|------|-------------|
| `default` | Prompt for approval on each tool operation |
| `auto_edit` | Auto-approve file editing tools |
| `yolo` | Auto-approve all tools (use with caution) |
```javascript
// Conservative (default)
geminiCli('auto', { approvalMode: 'default' })

// Auto-approve edits only
geminiCli('auto', { approvalMode: 'auto_edit' })

// Full automation (dangerous!)
geminiCli('auto', { approvalMode: 'yolo' })
// or
geminiCli('auto', { yolo: true })
```
```javascript
import { geminiCli } from 'ai-sdk-provider-gemini-cli-agentic';
import { streamText } from 'ai';

const result = streamText({
  model: geminiCli('auto', { cwd: process.cwd() }),
  prompt: 'Read package.json and explain the dependencies',
});

for await (const part of result.fullStream) {
  switch (part.type) {
    case 'text-delta':
      process.stdout.write(part.textDelta);
      break;
    case 'tool-call':
      console.log(`\n🔧 Tool: ${part.toolName}`);
      console.log(`   Args: ${JSON.stringify(part.args)}`);
      break;
    case 'tool-result':
      console.log(`   Result: ${part.result?.slice(0, 100)}...`);
      break;
  }
}
```
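If you handle these stream parts in several places, the switch can be factored into a small formatting helper. This is a hypothetical convenience (`formatStreamPart` is not part of the package), assuming the part shapes shown above:

```javascript
// Hypothetical helper: turn a fullStream part into a printable string.
// Assumes the part shapes used above: textDelta for 'text-delta',
// toolName/args for 'tool-call', result for 'tool-result'.
function formatStreamPart(part) {
  switch (part.type) {
    case 'text-delta':
      return part.textDelta;
    case 'tool-call':
      return `\n🔧 Tool: ${part.toolName} Args: ${JSON.stringify(part.args)}`;
    case 'tool-result':
      return `   Result: ${String(part.result).slice(0, 100)}`;
    default:
      return ''; // ignore other part types
  }
}
```

The loop body then collapses to `process.stdout.write(formatStreamPart(part))`.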
Enable verbose (info-level) logging:

```javascript
const model = geminiCli('auto', {
  verbose: true, // Enable info-level logging
});
```
Provide a custom logger:

```javascript
const model = geminiCli('auto', {
  logger: {
    debug: (msg) => console.debug('[DEBUG]', msg),
    info: (msg) => console.info('[INFO]', msg),
    warn: (msg) => console.warn('[WARN]', msg),
    error: (msg) => console.error('[ERROR]', msg),
  },
});
```
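If you also want level filtering, a logger can be produced by a small factory. This is an illustrative sketch (`createLevelLogger` is not part of the package); the provider only needs an object with `debug`/`info`/`warn`/`error` methods:

```javascript
// Sketch: build a logger that drops messages below a minimum level.
const LEVELS = ['debug', 'info', 'warn', 'error'];

function createLevelLogger(minLevel = 'info', sink = console) {
  const min = LEVELS.indexOf(minLevel);
  const logger = {};
  for (const level of LEVELS) {
    logger[level] =
      LEVELS.indexOf(level) >= min
        ? (msg) => sink[level](`[${level.toUpperCase()}]`, msg)
        : () => {}; // below threshold: no-op
  }
  return logger;
}

// const model = geminiCli('auto', { logger: createLevelLogger('warn') });
```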
Disable logging entirely:

```javascript
const model = geminiCli('auto', {
  logger: false,
});
```
```javascript
import { geminiCli, isAuthenticationError } from 'ai-sdk-provider-gemini-cli-agentic';
import { generateText } from 'ai';

try {
  const { text } = await generateText({
    model: geminiCli('auto'),
    prompt: 'Hello',
  });
} catch (error) {
  if (isAuthenticationError(error)) {
    console.error('Please run: gemini auth login');
  } else {
    console.error('Error:', error.message);
  }
}
```
The model ID is passed directly to Gemini CLI's `-m` flag:
```javascript
// Use auto model selection
geminiCli('auto')

// Specific models
geminiCli('gemini-2.5-flash')
geminiCli('gemini-2.5-pro')
geminiCli('gemini-3')
```
See the examples/ directory for more usage examples:
- basic-usage.mjs - Simple text generation
- streaming.mjs - Streaming responses
- streaming-tool-calls.mjs - Watching tool execution
- conversation-history.mjs - Multi-turn conversations
- custom-config.mjs - Advanced configuration
- permissions-and-sandbox.mjs - Approval modes and sandbox
- error-handling.mjs - Error handling patterns
- logging-*.mjs - Various logging configurations
Run examples:
```bash
cd examples
node basic-usage.mjs
```
- Interactive Prompts: The provider cannot handle Gemini CLI's interactive approval prompts. Use `approvalMode: 'yolo'` or `approvalMode: 'auto_edit'` for automation.
- No Image Input: Image/multimodal input is not supported (text only).
- No Embedding/Image Models: Only the language model is supported (`embeddingModel` and `imageModel` throw errors).
- Subprocess Overhead: Each call spawns a new subprocess; not suitable for high-frequency requests.
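Because of the last point, it can help to at least avoid spawning several CLI subprocesses concurrently. A minimal promise-queue sketch that serializes calls (`createSerialQueue` is not part of the package):

```javascript
// Minimal sketch: run async tasks one at a time, in submission order.
function createSerialQueue() {
  let tail = Promise.resolve();
  return function enqueue(task) {
    const run = tail.then(() => task());
    tail = run.catch(() => {}); // keep the chain alive if a task rejects
    return run;
  };
}

// Usage sketch:
// const enqueue = createSerialQueue();
// const { text } = await enqueue(() =>
//   generateText({ model: geminiCli('auto'), prompt: 'Hello' })
// );
```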
| Provider Setting | CLI Flag |
|-----------------|----------|
| `cwd` | Process working directory |
| `approvalMode` | `--approval-mode` |
| `yolo` | `-y` / `--yolo` |
| `sandbox` | `-s` / `--sandbox` |
| `includeDirectories` | `--include-directories` |
| `allowedTools` | `--allowed-tools` |
| `allowedMcpServerNames` | `--allowed-mcp-server-names` |
| `resume` | `-r` / `--resume` |
| `model` | `-m` / `--model` |
| (always set) | `--output-format stream-json` |
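As a mental model, the mapping above amounts to building an argv array from the settings object. The sketch below is a simplified illustration, not the provider's actual implementation (in particular, how list-valued settings are joined is an assumption):

```javascript
// Simplified sketch of settings-to-argv mapping (not the real implementation).
function buildArgs(modelId, settings = {}) {
  const args = ['-m', modelId, '--output-format', 'stream-json'];
  if (settings.approvalMode) args.push('--approval-mode', settings.approvalMode);
  if (settings.yolo) args.push('--yolo');
  if (settings.sandbox) args.push('--sandbox');
  if (settings.includeDirectories?.length)
    args.push('--include-directories', settings.includeDirectories.join(','));
  if (settings.allowedTools?.length)
    args.push('--allowed-tools', settings.allowedTools.join(','));
  if (settings.allowedMcpServerNames?.length)
    args.push('--allowed-mcp-server-names', settings.allowedMcpServerNames.join(','));
  if (settings.resume) args.push('--resume', String(settings.resume));
  return args;
}

console.log(buildArgs('gemini-2.5-flash', { sandbox: true }));
// returns: ['-m', 'gemini-2.5-flash', '--output-format', 'stream-json', '--sandbox']
```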
License: MIT
- AI SDK - The AI SDK by Vercel
- Gemini CLI - Google's Gemini CLI
- ai-sdk-provider-codex-cli - Similar provider for OpenAI Codex CLI