# Azure DevOps MCP Server

A Model Context Protocol (MCP) server for interacting with Azure DevOps agents and queues.

```bash
npm install @rxreyn3/azure-devops-mcp
```
## Required PAT Permissions

When creating your Personal Access Token (PAT) in Azure DevOps, you must grant:
- Agent Pools (read) - Required for agent management tools and queueing builds
- Build (read) - Required for listing builds and viewing timelines
- Build (read & execute) - Required for queueing new builds
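To sanity-check a new PAT before wiring it into an MCP client, you can call the Azure DevOps REST API directly. The sketch below is illustrative, not part of this server: it assumes ADO_ORGANIZATION and ADO_PAT are already set in your shell, and the api-version value may need adjusting for your organization.

```shell
#!/bin/sh
# Azure DevOps REST calls use Basic auth with an empty username
# and the PAT as the password.
make_auth_header() {
  printf 'Authorization: Basic %s' "$(printf ':%s' "$1" | base64)"
}

# Only attempt the network call when credentials are actually set.
if [ -n "${ADO_PAT:-}" ] && [ -n "${ADO_ORGANIZATION:-}" ]; then
  # A successful response listing projects confirms at least
  # project-level read access for this PAT.
  curl -s -H "$(make_auth_header "$ADO_PAT")" \
    "$ADO_ORGANIZATION/_apis/projects?api-version=7.1"
fi
```

A 401/403 response here usually means the same PAT will fail inside the MCP server as well, so this is a quick way to isolate credential problems from server problems.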
### Tool Scope Requirements

This server provides tools with different scope requirements:
| Tool | Minimum Scope | Description |
|------|--------------|-------------|
| project_* tools | Project | Access project queues and basic information |
| org_* tools | Organization | Access agent details (agents exist at org level) |
| build_* tools | Project | Access build timelines and execution details |
### Creating a PAT

1. Go to Azure DevOps → User Settings → Personal Access Tokens
2. Click "New Token"
3. Select your organization
4. Set expiration as needed
5. For full functionality, select:
   - Scope: Organization (not project-specific)
   - Permissions:
     - Agent Pools (read) - For agent management tools
     - Build (read & execute) - For build operations (list, view, queue)
> Note: Project-scoped PATs will only work with project_* and build_* tools. The org_* tools require organization-level access because agents are managed at the organization level in Azure DevOps.
## Security Best Practices

The env field in MCP client configurations (Claude Desktop, Claude Code, Windsurf) passes environment variables directly to the MCP server process. While convenient, never share configuration files containing actual PAT values.
1. Direct Configuration (Simplest)
   - Add your credentials directly to the configuration file
   - Keep the file secure and never commit it to version control
   - This is suitable for personal use on trusted machines
2. Environment Variable Reference (Most Secure)
   - Some MCP clients support referencing system environment variables
   - Set your credentials as system environment variables first:

     ```bash
     # macOS/Linux
     export ADO_PAT="your-actual-pat-value"
     ```

     ```powershell
     # Windows PowerShell
     $env:ADO_PAT = "your-actual-pat-value"
     ```

   - Then reference them in your config (if supported by your client)
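   As a hypothetical illustration (the ${...} expansion syntax, and whether it is honored at all, is entirely client-specific), such a reference might look like:

   ```json
   {
     "mcpServers": {
       "azure-devops": {
         "command": "npx",
         "args": ["-y", "@rxreyn3/azure-devops-mcp@latest"],
         "env": {
           "ADO_ORGANIZATION": "${ADO_ORGANIZATION}",
           "ADO_PROJECT": "${ADO_PROJECT}",
           "ADO_PAT": "${ADO_PAT}"
         }
       }
     }
   }
   ```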
3. Configuration Management
   - Store a template configuration in version control with placeholder values
   - Keep your actual configuration with real values locally
   - Use .gitignore to prevent accidental commits

PAT hygiene:

- Create dedicated PATs for MCP usage with minimal required permissions
- Set short expiration dates and rotate regularly
- Use different PATs for different projects or environments
- Never share PATs in documentation, issues, or support requests
- Revoke immediately if you suspect compromise
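If you keep a config template in a repository, a .gitignore along these lines keeps the real files out of version control (the filenames below are examples only; use whatever paths your client actually writes):

```gitignore
# Local MCP client configs that contain real PAT values (example names)
claude_desktop_config.local.json
settings.local.json
*.secret.json
```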
## Setting System Environment Variables

If you choose to use system environment variables:
#### macOS/Linux
```bash
# Add to ~/.bashrc, ~/.zshrc, or ~/.profile
export ADO_ORGANIZATION="https://dev.azure.com/your-organization"
export ADO_PROJECT="your-project-name"
export ADO_PAT="your-personal-access-token"
```
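After adding the exports, open a new terminal and confirm the variables are visible. A small POSIX sh sketch (nothing here is specific to this server; it just reports which variables are set):

```shell
#!/bin/sh
# Report whether each expected variable is set, without failing the script.
check_var() {
  eval "val=\${$1:-}"
  if [ -n "$val" ]; then
    echo "$1 is set"
  else
    echo "$1 is NOT set"
  fi
}

check_var ADO_ORGANIZATION
check_var ADO_PROJECT
check_var ADO_PAT
```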
#### Windows (PowerShell)
```powershell
# Set user environment variables (permanent)
[System.Environment]::SetEnvironmentVariable("ADO_ORGANIZATION", "https://dev.azure.com/your-organization", "User")
[System.Environment]::SetEnvironmentVariable("ADO_PROJECT", "your-project-name", "User")
[System.Environment]::SetEnvironmentVariable("ADO_PAT", "your-personal-access-token", "User")
```

Restart your terminal for changes to take effect.

#### Windows (Command Prompt)
```cmd
REM Set user environment variables (permanent)
setx ADO_ORGANIZATION "https://dev.azure.com/your-organization"
setx ADO_PROJECT "your-project-name"
setx ADO_PAT "your-personal-access-token"
```

Restart your terminal for changes to take effect.

> Note: Setting system environment variables is optional. The MCP client's env field will pass these values directly to the server process regardless of your system environment.

## Installation & Usage
This MCP server can be used with Windsurf, Claude Desktop, and Claude Code. All methods use npx to run the package directly without installation.

### Windsurf
Add the following to your Windsurf settings at ~/.windsurf/settings.json:

```json
{
"mcpServers": {
"azure-devops": {
"command": "npx",
"args": ["-y", "@rxreyn3/azure-devops-mcp@latest"],
"env": {
"ADO_ORGANIZATION": "https://dev.azure.com/your-organization",
"ADO_PROJECT": "your-project-name",
"ADO_PAT": "your-personal-access-token"
}
}
}
}
```

### Claude Desktop
Add the following to your Claude Desktop configuration:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%/Claude/claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json

```json
{
"mcpServers": {
"azure-devops": {
"command": "npx",
"args": ["-y", "@rxreyn3/azure-devops-mcp@latest"],
"env": {
"ADO_ORGANIZATION": "https://dev.azure.com/your-organization",
"ADO_PROJECT": "your-project-name",
"ADO_PAT": "your-personal-access-token"
}
}
}
}
```

### Claude Code
Use the Claude Code CLI to add the server with environment variables:
```bash
claude mcp add azure-devops \
-e ADO_ORGANIZATION="https://dev.azure.com/your-organization" \
-e ADO_PROJECT="your-project-name" \
-e ADO_PAT="your-personal-access-token" \
-- npx -y @rxreyn3/azure-devops-mcp@latest
```

### Configuration Values
Replace the following values in any of the above configurations:
- your-organization: Your Azure DevOps organization name
- your-project-name: Your Azure DevOps project name
- your-personal-access-token: Your PAT with the permissions described above

## Available Tools
### Project Tools
These tools work with project-scoped PATs:
- project_health_check - Test connection and verify permissions
- project_list_queues - List all agent queues in the project
- project_get_queue - Get detailed information about a specific queue

### Organization Tools
These tools require organization-level PAT permissions:
- org_find_agent - Search for an agent across all organization pools
- org_list_agents - List agents from project pools with filtering options

### Build Tools
These tools work with project-scoped PATs:
- build_list - List builds with filtering and pagination support (requires Build read)
  - Filter by pipeline name (partial match), status, result, branch, or date range
  - Date filtering with minTime/maxTime parameters (e.g., "2024-01-01", "2024-01-31T23:59:59Z")
  - Returns build details including ID, number, status, and timing
  - Supports pagination for large result sets
- build_list_definitions - List pipeline definitions to find IDs and names (requires Build read)
  - Filter by name (partial match)
  - Useful for discovering pipeline IDs needed for other operations
- build_get_timeline - Get the timeline for a build showing all jobs, tasks, and which agents executed them (requires Build read)
  - Requires a build ID (use build_list to find build IDs)
- build_queue - Queue (launch) a new build for a pipeline definition (requires Build read & execute AND Agent Pools read)
  - Required: definitionId (use build_list_definitions to find)
  - Optional: sourceBranch, parameters (key-value pairs), reason, demands, queueId
  - Returns the queued build details including ID and status
- build_download_job_logs - Download logs for a specific job from a build by job name (requires Build read)
  - Required: buildId, jobName (e.g., "GPU and System Diagnostics")
  - Optional: outputPath (if not provided, saves to managed temp directory)
  - Streams log content to file for efficient memory usage
  - Smart filename generation when outputPath is a directory
  - Validates job completion status before downloading
  - Returns saved file path, size, job details, duration, and whether file is temporary
- build_download_logs_by_name - Download logs for a stage, job, or task by searching for its name in the build timeline (requires Build read)
  - Required: buildId, name (e.g., "Deploy", "Trigger Async Shift Upload")
  - Optional: outputPath (if not provided, saves to managed temp directory), exactMatch (default: true) - set to false for partial/case-insensitive matching
  - Automatically detects whether the name refers to a stage, job, or task
  - For stages/phases: Downloads all child job logs into an organized directory structure
  - For jobs: Downloads the job log (same as build_download_job_logs)
  - For tasks: Downloads the individual task log with parent job context
  - Handles multiple matches by showing all options and requesting clarification
  - Returns downloaded log paths, sizes, matched record details, and whether files are temporary
- build_list_artifacts - List all artifacts available for a specific build (requires Build read)
  - Required: buildId
  - Returns artifact names, IDs, types, and download URLs
  - Shows metadata about published build artifacts
- build_download_artifact - Download a Pipeline artifact from a build using signed URLs (requires Build read)
  - Required: buildId, artifactName (e.g., "RenderLogs")
  - Optional: outputPath (if not provided, saves to managed temp directory), definitionId (from build.definition.id) - will be fetched automatically if not provided
  - Only supports Pipeline artifacts (created with PublishPipelineArtifact task)
  - Downloads artifacts as ZIP files using temporary signed URLs
  - Smart filename generation when outputPath is a directory
  - Returns saved file path, size, artifact details, and whether file is temporary
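Tying the build_queue description above to a concrete shape, an MCP client's tool-call arguments might look like the following (all values are hypothetical):

```json
{
  "definitionId": 42,
  "sourceBranch": "refs/heads/main",
  "parameters": {
    "configuration": "Release"
  },
  "reason": "manual"
}
```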
### Download Management Tools

These tools help manage downloaded files in the temporary directory:
- list_downloads - List all files downloaded to the temporary directory
  - Shows all logs and artifacts downloaded by this MCP server
  - Returns file paths, sizes, download times, and age
  - Groups files by category (logs/artifacts) and build ID
  - Shows the temporary directory location
- cleanup_downloads - Remove old downloaded files from the temporary directory
  - Optional: olderThanHours (default: 24) - remove files older than this many hours
  - Returns number of files removed and space saved
  - Reports any errors encountered during cleanup
- get_download_location - Get information about the temporary directory
  - Shows the temp directory path
  - Reports total size and file count
  - Shows information about the oldest file

## Temporary File Handling
When download tools (build_download_job_logs, build_download_logs_by_name, build_download_artifact) are called without an outputPath, files are automatically saved to a managed temporary directory:

- Structure: /tmp/ado-mcp-server-{pid}/downloads/{category}/{buildId}/
- Automatic Cleanup: Old temp directories from previous sessions are cleaned on startup
- Process Isolation: Each server instance uses its own temp directory
- File Persistence: Files persist until manually cleaned up using cleanup_downloads or server restart

This prevents workspace pollution and makes it easier for AI models to track downloaded files.
## Example Interactions
Ask your AI assistant questions like:
- "List all builds that failed today"
- "Find which agent ran build 12345"
- "Show me all available build queues in the project"
- "Check if agent BM40-BUILD-01 is online"
- "Get the last 5 builds for the preflight pipeline"
- "Which builds are currently running?"
- "Show me builds from January 2024" (uses date filtering with minTime/maxTime)
- "List failed builds between 2024-01-15 and 2024-01-20"
- "Queue a build for pipeline X with parameter Y=Z"
- "Launch the nightly build with custom branch refs/heads/feature/test"
- "Download the logs for GPU and System Diagnostics from build 5782897"
- "Save the job logs for 'Test 3: With Render Optimizations' to ./logs/"
- "What artifacts are available for build 5782897?"
- "Download the RenderLogs artifact from build 5782897"
## Error Handling
If you encounter permission errors:
1. Verify your PAT has the required permissions:
   - Agent Pools (read) - For agent management tools and build_queue
   - Build (read) - For listing builds and viewing timelines
   - Build (read & execute) - For queueing new builds with build_queue
2. For org_* tools, ensure your PAT is organization-scoped, not project-scoped
3. For build_queue, you need BOTH "Build (read & execute)" AND "Agent Pools (read)"

Common error messages:
- "Access denied" - Your PAT lacks necessary permissions
- "Resource not found" - The queue/agent/build doesn't exist or you lack access
- "Invalid authentication" - Your PAT may be expired or incorrectly formatted
- "Timeline not found" - The build ID doesn't exist or doesn't have timeline data
## Contributing

We welcome contributions! Please see our Contributing Guide for details on:
- Development setup
- Adding new tools
- Testing guidelines
- Submitting pull requests
## License

MIT