MCP server for AI CLI tools (Claude, Codex, and Gemini) with background process management
```bash
npm install ai-cli-mcp
```

> 📦 Package Migration Notice: This package was formerly @mkxultra/claude-code-mcp and has been renamed to ai-cli-mcp to reflect its expanded support for multiple AI CLI tools.
An MCP (Model Context Protocol) server that allows running AI CLI tools (Claude, Codex, and Gemini) in background processes with automatic permission handling.
Have you noticed that Cursor sometimes struggles with complex, multi-step edits or operations? This server's unified run tool lets multiple AI agents handle your coding tasks more effectively.

This MCP server provides tools that can be used by LLMs to interact with AI CLI tools. When integrated with MCP clients, it allows LLMs to:
- Run Claude CLI with all permissions bypassed (using --dangerously-skip-permissions)
- Execute Codex CLI with automatic approval mode (using --full-auto)
- Execute Gemini CLI with automatic approval mode (using -y)
- Support multiple AI models: Claude (sonnet, opus, haiku), Codex (gpt-5.2-codex, gpt-5.1-codex-mini, gpt-5.1-codex-max, gpt-5.2, gpt-5.1, gpt-5.1-codex, gpt-5-codex, gpt-5-codex-mini, gpt-5), and Gemini (gemini-2.5-pro, gemini-2.5-flash, gemini-3-pro-preview, gemini-3-flash-preview)
- Manage background processes with PID tracking
- Parse and return structured outputs from each tool
You can instruct your main agent to run multiple tasks in parallel like this:
> Launch agents for the following 3 tasks using acm mcp run:
> 1. Refactor src/backend code using sonnet
> 2. Create unit tests for src/frontend using gpt-5.2-codex
> 3. Update docs in docs/ using gemini-2.5-pro
>
> While they run, please update the TODO list. Once done, use the wait tool to wait for all completions and report the results together.
You can reuse heavy context (like large codebases) using session IDs to save costs while running multiple tasks.
> 1. First, use acm mcp run with opus to read all files in src/ and understand the project structure.
> 2. Use the wait tool to wait for completion and retrieve the session_id from the result.
> 3. Using that session_id, run the following two tasks in parallel with acm mcp run:
> - Create refactoring proposals for src/utils using sonnet
> - Add architecture documentation to README.md using gpt-5.2-codex
> 4. Finally, wait again to combine both results.
- True Async Multitasking: Agent execution happens in the background, returning control immediately. The calling AI can proceed with the next task or invoke another agent without waiting for completion.
- CLI in CLI (Agent in Agent): Directly invoke powerful CLI tools like Claude Code or Codex from any MCP-supported IDE or CLI. This enables broader, more complex system operations and automation beyond host environment limitations.
- Freedom from Model/Provider Constraints: Freely select and combine the "strongest" or "most cost-effective" models from Claude, Codex (GPT), and Gemini without being tied to a specific ecosystem.
The only prerequisite is that the AI CLI tools you want to use are locally installed and correctly configured.
- Claude Code: claude doctor passes, and execution with --dangerously-skip-permissions is approved (you must run it manually once to login and accept terms).
- Codex CLI (Optional): Installed and initial setup (login etc.) completed.
- Gemini CLI (Optional): Installed and initial setup (login etc.) completed.
The recommended way to run this server is via npx.
```json
"ai-cli-mcp": {
  "command": "npx",
  "args": [
    "-y",
    "ai-cli-mcp@latest"
  ]
},
```
```bash
claude mcp add ai-cli '{"name":"ai-cli","command":"npx","args":["-y","ai-cli-mcp@latest"]}'
```
Before the MCP server can use Claude, you must run the Claude CLI manually once with the --dangerously-skip-permissions flag, log in, and accept the terms.
```bash
npm install -g @anthropic-ai/claude-code
claude --dangerously-skip-permissions
```
Follow the prompts to accept. Once this is done, the MCP server will be able to use the flag non-interactively.
For Codex, ensure you're logged in and have accepted any necessary terms:
```bash
codex login
```
For Gemini, ensure you're logged in and have configured your credentials:
```bash
gemini auth login
```
macOS might ask for folder permissions the first time any of these tools run. If the first run fails, subsequent runs should work.
After setting up the server, add the configuration to your MCP client's settings file (e.g., mcp.json for Cursor, mcp_config.json for Windsurf).
If the file doesn't exist, create it and add the ai-cli-mcp configuration.
This server exposes the following tools:
Executes a prompt using Claude CLI, Codex CLI, or Gemini CLI. The appropriate CLI is automatically selected based on the model name.
Arguments:
- prompt (string, optional): The prompt to send to the AI agent. Either prompt or prompt_file is required.
- prompt_file (string, optional): Path to a file containing the prompt. Either prompt or prompt_file is required. Can be an absolute path or relative to workFolder.
- workFolder (string, required): The working directory for the CLI execution. Must be an absolute path.

Models:
- Ultra Aliases: claude-ultra, codex-ultra (defaults to high-reasoning), gemini-ultra
- Claude: sonnet, opus, haiku
- Codex: gpt-5.2-codex, gpt-5.1-codex-mini, gpt-5.1-codex-max, gpt-5.2, gpt-5.1, gpt-5
- Gemini: gemini-2.5-pro, gemini-2.5-flash, gemini-3-pro-preview, gemini-3-flash-preview

Additional arguments:
- reasoning_effort (string, optional): Codex only. Sets model_reasoning_effort (allowed: "low", "medium", "high").
- session_id (string, optional): Session ID to resume a previous session. Supported for: haiku, sonnet, opus, gemini-2.5-pro, gemini-2.5-flash, gemini-3-pro-preview, gemini-3-flash-preview.
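Putting the arguments together, a run call that resumes an earlier session might look like the following (the session_id value and paths are illustrative, not real output):

```json
{
  "prompt": "Create refactoring proposals for src/utils",
  "workFolder": "/Users/steipete/my_project",
  "model": "sonnet",
  "session_id": "example-session-id"
}
```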
Waits for multiple AI agent processes to complete and returns their combined results. Blocks until all specified PIDs finish or a timeout occurs.
Arguments:
- pids (array of numbers, required): List of process IDs to wait for (returned by the run tool).
- timeout (number, optional): Maximum wait time in seconds. Defaults to 180 (3 minutes).
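For example, after starting two agents with run, a wait call that collects both results might look like this (the PIDs are illustrative values returned by run):

```json
{
  "pids": [12345, 12346],
  "timeout": 300
}
```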
Lists all running and completed AI agent processes with their status, PID, and basic info.
Gets the current output and status of an AI agent process by PID.
Arguments:
- pid (number, required): The process ID returned by the run tool.
Terminates a running AI agent process by PID.
Arguments:
- pid (number, required): The process ID to terminate.
Here are some visual examples of the server in action:



Here's an example of using the Claude Code MCP tool to interactively fix an ESLint setup by deleting old configuration files and creating a new one:

Here's an example of the Claude Code tool listing files in a directory:

This server, through its unified run tool, unlocks a wide range of powerful capabilities by giving your AI direct access to the Claude, Codex, and Gemini CLI tools. Here are some examples of what you can achieve:
1. Code Generation, Analysis & Refactoring:
- "Generate a Python script to parse CSV data and output JSON."
- "Analyze my_script.py for potential bugs and suggest improvements."
2. File System Operations (Create, Read, Edit, Manage):
- Creating Files: "Your work folder is /Users/steipete/my_project\n\nCreate a new file named 'config.yml' in the 'app/settings' directory with the following content:\nport: 8080\ndatabase: main_db"
- Editing Files: "Your work folder is /Users/steipete/my_project\n\nEdit file 'public/css/style.css': Add a new CSS rule at the end to make all 'h2' elements have a 'color: navy'."
- Moving/Copying/Deleting: "Your work folder is /Users/steipete/my_project\n\nMove the file 'report.docx' from the 'drafts' folder to the 'final_reports' folder and rename it to 'Q1_Report_Final.docx'."
3. Version Control (Git):
- "Your work folder is /Users/steipete/my_project\n\n1. Stage the file 'src/main.java'.\n2. Commit the changes with the message 'feat: Implement user authentication'.\n3. Push the commit to the 'develop' branch on origin."
4. Running Terminal Commands:
- "Your work folder is /Users/steipete/my_project/frontend\n\nRun the command 'npm run build'."
- "Open the URL https://developer.mozilla.org in my default web browser."
5. Web Search & Summarization:
- "Search the web for 'benefits of server-side rendering' and provide a concise summary."
6. Complex Multi-Step Workflows:
- Automate version bumps, update changelogs, and tag releases: "Your work folder is /Users/steipete/my_project\n\nFollow these steps: 1. Update the version in package.json to 2.5.0. 2. Add a new section to CHANGELOG.md for version 2.5.0 with the heading '### Added' and list 'New feature X'. 3. Stage package.json and CHANGELOG.md. 4. Commit with message 'release: version 2.5.0'. 5. Push the commit. 6. Create and push a git tag v2.5.0."

7. Repairing Files with Syntax Errors:
- "Your work folder is /path/to/project\n\nThe file 'src/utils/parser.js' has syntax errors after a recent complex edit that broke its structure. Please analyze it, identify the syntax errors, and correct the file to make it valid JavaScript again, ensuring the original logic is preserved as much as possible."
8. Interacting with GitHub (e.g., Creating a Pull Request):
- "Your work folder is /Users/steipete/my_project\n\nCreate a GitHub Pull Request in the repository 'owner/repo' from the 'feature-branch' to the 'main' branch. Title: 'feat: Implement new login flow'. Body: 'This PR adds a new and improved login experience for users.'"
9. Interacting with GitHub (e.g., Checking PR CI Status):
- "Your work folder is /Users/steipete/my_project\n\nCheck the status of CI checks for Pull Request #42 in the GitHub repository 'owner/repo'. Report if they have passed, failed, or are still running."

This example illustrates the AI agent handling a more complex, multi-step task, such as preparing a release by creating a branch, updating multiple files (package.json, CHANGELOG.md), committing changes, and initiating a pull request, all within a single, coherent operation.

CRITICAL: Remember to provide Current Working Directory (CWD) context in your prompts for file system or git operations (e.g., "Your work folder is /path/to/project\n\n...your command...").
- "Command not found" (ai-cli-mcp): If installed globally, ensure the npm global bin directory is in your system's PATH. If using npx, ensure npx itself is working.
- "Command not found" (claude or ~/.claude/local/claude): Ensure the Claude CLI is installed correctly. Run claude doctor or check its documentation.
- Permissions Issues: Make sure you've run the "Important First-Time Setup" step.
- JSON Errors from Server: If MCP_CLAUDE_DEBUG is true, error messages or logs might interfere with MCP's JSON parsing. Set it to false for normal operation.
- ESM/Import Errors: Ensure you are using Node.js v20 or later.
For Developers: Local Setup & Contribution
If you want to develop or contribute to this server, or run it from a cloned repository for testing, please see our Local Installation & Development Setup Guide.
The project includes comprehensive test suites:
```bash
# Run all tests
npm test
```
For detailed testing documentation, see our E2E Testing Guide.
Manual Testing with MCP Inspector
You can manually test the MCP server using the Model Context Protocol Inspector:
```bash
# Build the project first
npm run build

# Start the MCP Inspector with the server
npx @modelcontextprotocol/inspector node dist/server.js
```

This will open a web interface where you can:
1. View all available tools (run, wait, list_processes, get_result, kill_process)
2. Test each tool with different parameters
3. Test different AI models including:
- Claude models: sonnet, opus, haiku
- Codex models: gpt-5.2-codex, gpt-5.1-codex-mini, gpt-5.1-codex-max, gpt-5.2, gpt-5.1, gpt-5.1-codex, gpt-5-codex, gpt-5-codex-mini, gpt-5
- Gemini models: gemini-2.5-pro, gemini-2.5-flash, gemini-3-pro-preview, gemini-3-flash-preview

Example test: Select the run tool and provide:
- prompt: "What is 2+2?"
- workFolder: "/tmp"
- model: "gemini-2.5-flash"

Configuration via Environment Variables
The server's behavior can be customized using these environment variables:
- CLAUDE_CLI_PATH: Absolute path to the Claude CLI executable.
  - Default: Checks ~/.claude/local/claude, then falls back to claude (expecting it in PATH).
- MCP_CLAUDE_DEBUG: Set to true for verbose debug logging from this MCP server. Default: false.

These can be set in your shell environment or within the env block of your mcp.json server configuration (the env block was removed from the mcp.json examples above for simplicity, but it remains a valid way to set them for the server process if needed).

Contributing
Contributions are welcome! Please refer to the Local Installation & Development Setup Guide for details on setting up your environment.
Submit issues and pull requests to the GitHub repository.
Advanced Configuration (Optional)
Normally not required, but useful for customizing CLI paths or debugging.
- CLAUDE_CLI_NAME: Override the Claude CLI binary name or provide an absolute path (default: claude)
- CODEX_CLI_NAME: Override the Codex CLI binary name or provide an absolute path (default: codex)
- GEMINI_CLI_NAME: Override the Gemini CLI binary name or provide an absolute path (default: gemini)
- MCP_CLAUDE_DEBUG: Enable debug logging (set to true for verbose output)

CLI Name Specification:
- Command name only: CLAUDE_CLI_NAME=claude-custom
- Absolute path: CLAUDE_CLI_NAME=/path/to/custom/claude

Relative paths are not supported.
```json
"ai-cli-mcp": {
  "command": "npx",
  "args": [
    "-y",
    "ai-cli-mcp@latest"
  ],
  "env": {
    "CLAUDE_CLI_NAME": "claude-custom",
    "CODEX_CLI_NAME": "codex-custom"
  }
},
```

MIT