Context-optimization MCP server for AI coding assistants, compatible with GitHub Copilot, Cursor AI, and other MCP-supporting assistants
A Model Context Protocol (MCP) server that provides context-optimization tools for AI coding assistants, including GitHub Copilot, Cursor AI, Claude Desktop, and other MCP-compatible assistants, enabling them to extract targeted information instead of wasting context by processing large terminal outputs and files.
> This MCP server is the evolution of the VS Code Copilot Context Optimizer extension, but with compatibility across MCP-supporting applications.
Have you ever experienced this with your AI coding assistant (like Copilot, Claude Code, or Cursor)?
* Your assistant keeps compacting or summarizing the conversation, losing some context in the process.
* Terminal outputs flood the context with hundreds of lines when the assistant only needs a few key details.
* Large files overwhelm the context when the assistant just needs to check one specific thing.
* "Context limit reached" messages interrupt your workflow.
* Your assistant "forgets" earlier parts of the conversation due to context overflow.
* Reasoning quality drops as the conversation grows longer.
The Root Cause: When your assistant:
* Reads long logs from builds, tests, lints, etc. after executing a terminal command.
* Reads one or more large files in full just to answer a question that doesn't require the whole contents.
* Reads multiple web pages while researching how to do something.
* Or simply accumulates history over a long conversation.
The assistant will either:
* Start compacting, summarizing, or truncating the conversation history.
* Drop the quality of reasoning.
* Lose track of earlier context and decisions.
* Become less helpful as it loses focus.
The Solution:
This server provides any MCP-compatible assistant with specialized tools that extract only the specific information you need, keeping your chat context clean and focused on productive problem-solving rather than data management.
- File Analysis Tool (askAboutFile) - Extract specific information from files without loading entire contents
- Terminal Execution Tool (runAndExtract) - Execute commands and extract relevant information using LLM analysis
- Follow-up Questions Tool (askFollowUp) - Continue conversations about previous terminal executions
- Research Tools (researchTopic, deepResearch) - Conduct web research using Exa.ai's API
- Security Controls - Path validation, command filtering, and session management
- Multi-LLM Support - Works with Google Gemini, Claude (Anthropic), and OpenAI
- Environment Variable Configuration - API key management through system environment variables
- Simple Configuration - Environment variables only, no config files to manage
- Comprehensive Testing - Unit tests, integration tests, and security validation
1. Install globally:
```bash
npm install -g context-optimizer-mcp-server
```
2. Set environment variables (see docs/guides/usage.md for OS-specific instructions):
```bash
export CONTEXT_OPT_LLM_PROVIDER="gemini"
export CONTEXT_OPT_GEMINI_KEY="your-gemini-api-key"
export CONTEXT_OPT_EXA_KEY="your-exa-api-key"
export CONTEXT_OPT_ALLOWED_PATHS="/path/to/your/projects"
```
3. Add to your MCP client configuration:
For example, under "mcpServers" in claude_desktop_config.json (Claude Desktop) or "servers" in mcp.json (VS Code):
```json
"context-optimizer": {
  "command": "context-optimizer-mcp"
}
```
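In Claude Desktop, for instance, that entry nests under the top-level "mcpServers" object. A minimal claude_desktop_config.json sketch:

```json
{
  "mcpServers": {
    "context-optimizer": {
      "command": "context-optimizer-mcp"
    }
  }
}
```

Because all settings come from environment variables, no "env" block is strictly required here as long as the variables from step 2 are visible to the process that launches the server.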
For complete setup instructions including OS-specific environment variable configuration and AI assistant setup, see docs/guides/usage.md.
- askAboutFile - Extract specific information from files without loading entire contents into chat context. Perfect for checking if files contain specific functions, extracting import/export statements, or understanding file purpose without reading the full content.
- runAndExtract - Execute terminal commands and intelligently extract relevant information using LLM analysis. Supports non-interactive commands with security validation, timeouts, and session management for follow-up questions.
- askFollowUp - Continue conversations about previous terminal executions without re-running commands. Access complete context from previous runAndExtract calls including full command output and execution details.
- researchTopic - Conduct quick, focused web research on software development topics using Exa.ai's research capabilities. Get current best practices, implementation guidance, and up-to-date information on evolving technologies.
- deepResearch - Comprehensive research and analysis using Exa.ai's exhaustive capabilities for critical decision-making and complex architectural planning. Ideal for strategic technology decisions, architecture planning, and long-term roadmap development.
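To make this concrete, a runAndExtract invocation from an MCP client might carry arguments like the following. The argument names ("command", "extractionGoal") are illustrative assumptions, not the documented schema; see docs/tools.md for the actual parameters:

```json
{
  "name": "runAndExtract",
  "arguments": {
    "command": "npm test",
    "extractionGoal": "List only the failing test names and their error messages"
  }
}
```

Instead of hundreds of lines of test output entering the chat context, only the LLM-extracted summary does, and askFollowUp can later query the full output retained in the session.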
For detailed tool documentation and examples, see docs/tools.md and docs/guides/usage.md.
All documentation is organized under the docs/ directory:
| Topic | Location | Description |
|-------|----------|-------------|
| Architecture | docs/architecture.md | System design and component overview |
| Tools Reference | docs/tools.md | Complete tool documentation and examples |
| Usage Guide | docs/guides/usage.md | Complete setup and configuration |
| VS Code Setup | docs/guides/vs-code-setup.md | VS Code specific configuration |
| Troubleshooting | docs/guides/troubleshooting.md | Common issues and solutions |
| API Keys | docs/reference/api-keys.md | API key management |
| Testing | docs/reference/testing.md | Testing framework and procedures |
| Changelog | docs/reference/changelog.md | Version history |
| Contributing | docs/reference/contributing.md | Development guidelines |
| Security | docs/reference/security.md | Security policy |
| Code of Conduct | docs/reference/code-of-conduct.md | Community guidelines |
- Usage Guide: See docs/guides/usage.md for complete setup instructions
- Tools Reference: Check docs/tools.md for detailed tool documentation
- Troubleshooting: Check docs/guides/troubleshooting.md for common issues
- VS Code Setup: Follow docs/guides/vs-code-setup.md for VS Code configuration
Testing
```bash
# Run all tests (skips LLM integration tests without API keys)
npm test

# Run tests with API keys for full integration testing.
# Set environment variables first:
export CONTEXT_OPT_LLM_PROVIDER="gemini"
export CONTEXT_OPT_GEMINI_KEY="your-gemini-key"
export CONTEXT_OPT_EXA_KEY="your-exa-key"
npm test  # Now runs all tests, including LLM integration

# Run in watch mode
npm run test:watch
```
For comprehensive end-to-end testing with an AI assistant, see the Manual Testing Setup Guide. This provides a workflow-based testing protocol that validates all tools through realistic scenarios.
For detailed testing setup, see docs/reference/testing.md.
Contributions are welcome! Please read docs/reference/contributing.md for guidelines on development workflow, coding standards, testing, and submitting pull requests.
- Code of Conduct: See docs/reference/code-of-conduct.md
- Security Reports: Follow docs/reference/security.md for responsible disclosure
- Issues: Use GitHub Issues for bugs & feature requests
- Pull Requests: Ensure tests pass and docs are updated
- Discussions: (If enabled) Use for open-ended questions/ideas
MIT License - see LICENSE file for details.
- VS Code Copilot Context Optimizer – Original VS Code extension (companion project)