# json-mcp

An MCP server providing ergonomic tools for efficient JSON manipulation in Claude Code.
Stop wrestling with bash/jq pipelines and multi-turn operations. json-mcp provides five simple tools that handle JSON files of any size with intuitive syntax and zero setup overhead.
**Before (without MCP tools):**

- 4+ turns to filter, transform, or update JSON
- Complex jq syntax and bash pipelines
- Frequent errors with large files
- Setup overhead (npm install, script writing)

**After (with json-mcp):**

- 1 turn for most operations
- Simple filter syntax: `{"price": ">1000"}`
- Handles 100MB+ files with automatic streaming
- Zero setup: just install and go
Rigorous testing across 5 real-world scenarios showed:
- 76% reduction in turns per task
- 83% reduction in tool calls
- 100% error elimination
- Perfect success rate maintained
## Installation

```bash
npm install -g json-mcp
```

Verify the installation:

```bash
json-mcp --version
```

Note: make sure your npm global bin directory is on your PATH. Check its location with:

```bash
npm bin -g
```

Or run without installing globally:

```bash
npx json-mcp@latest
```
## Setup

### Global (all projects)

Add to `~/.claude.json`:

```json
{
  "mcpServers": {
    "json-mcp": {
      "command": "json-mcp"
    }
  }
}
```

After configuring, restart Claude Code for the changes to take effect.

### Per-project

Create `.mcp.json` in your project root:

```json
{
  "mcpServers": {
    "json-mcp": {
      "command": "json-mcp"
    }
  }
}
```

This makes the MCP server automatically available when working in that directory. Commit `.mcp.json` to share the configuration with your team.
## Tools

### json_query

Extract and filter data from JSON files with simple syntax.

Features:

- Simple comparison operators: `>`, `<`, `>=`, `<=`, `=`
- Grouping and aggregation: `count`
- Multiple output formats: `json`, `table`, `csv`, `keys`
- Automatic JSONL streaming for large files
- Preview mode with `limit`
Examples:

```typescript
// Find expensive products
json_query({
  file: "products.json",
  filter: { "price": ">1000" },
  output: "table"
})

// Group error logs by type
json_query({
  file: "logs.jsonl",
  filter: { "level": "error" },
  groupBy: "error_type",
  aggregate: "count"
})

// Preview first 5 results
json_query({
  file: "large-dataset.json",
  filter: { "status": "active" },
  limit: 5
})
```
### json_transform

Transform JSON structure and convert between formats.

Features:

- JSON → CSV and JSON → JSONL conversion
- Pre-filtering before transformation
- Field mapping and renaming
- Handles large files efficiently
Examples:

```typescript
// Convert JSON to CSV
json_transform({
  file: "products.json",
  output: "products.csv",
  format: "csv"
})

// Filter then transform
json_transform({
  file: "users.json",
  output: "active-users.csv",
  format: "csv",
  filter: { "status": "active" }
})

// Field mapping
json_transform({
  file: "data.json",
  output: "mapped.json",
  format: "json",
  template: {
    "id": "sku",
    "name": "title"
  }
})
```
### json_update

Update or create fields in JSON files atomically, with wildcard paths.

Features:

- Create new fields: automatically creates nested paths that don't exist
- Wildcard path syntax: `**.fieldName` (no jq expertise required)
- Conditional updates with a `where` clause
- Dry-run preview mode
- Automatic backup creation
- Atomic writes (no broken JSON; see the sketch after the examples)
Examples:

```typescript
// Update existing field
json_update({
  file: "config.json",
  updates: [{
    path: "$.server.port",
    value: 8080
  }]
})

// Create new nested field (creates intermediate objects automatically)
json_update({
  file: "settings.json",
  updates: [{
    path: "$.mcpServers.vision",
    value: {
      command: "npx",
      args: ["-y", "mcp-gemini-vision"]
    }
  }]
})

// Update all timeout values conditionally
json_update({
  file: "config.json",
  updates: [{
    path: "**.timeout",
    value: 60000,
    where: { oldValue: 30000 }
  }]
})

// Preview changes first
json_update({
  file: "config.json",
  updates: [{
    path: "$.server.port",
    value: 8080
  }],
  dryRun: true
})

// Multiple updates at once
json_update({
  file: "settings.json",
  updates: [
    { path: "$.theme", value: "dark" },
    { path: "$.notifications.enabled", value: true },
    { path: "$.newSection.subsection.field", value: "creates full path" },
    { path: "$.permissions.allow[0]", value: "mcp__chrome-devtools__*" } // creates the array as needed
  ]
})
```
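Atomicity comes from the classic write-then-rename pattern: the new content is written to a temporary file and only swapped into place once it is complete. A minimal sketch of the idea (`writeJsonAtomic` is a hypothetical helper, not part of json-mcp's API):

```typescript
import { writeFile, rename } from "node:fs/promises";

// Hypothetical helper: the target file is only ever replaced by a fully
// written temp file, so a crash mid-write cannot leave broken JSON behind.
async function writeJsonAtomic(path: string, data: unknown): Promise<void> {
  const tmpPath = `${path}.tmp-${process.pid}`;
  await writeFile(tmpPath, JSON.stringify(data, null, 2), "utf8");
  await rename(tmpPath, path); // rename(2) is atomic on POSIX filesystems
}
```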
### json_validate

Validate JSON files against schemas with detailed error reporting.

Features:

- JSON Schema validation using Ajv
- Batch validation of multiple files
- Detailed or summary output modes
- Clear violation reporting
- 100% accuracy in testing
Examples:

```typescript
// Validate single file
json_validate({
  files: "data.json",
  schema: "schema.json"
})

// Validate multiple files
json_validate({
  files: ["file1.json", "file2.json", "file3.json"],
  schema: "schema.json",
  output: "detailed"
})

// Summary mode
json_validate({
  files: "products.json",
  schema: "product-schema.json",
  output: "summary"
})
```
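For context, a `product-schema.json` like the one referenced above might look like this. This is a hypothetical schema, shown only to make the examples concrete:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["sku", "price"],
  "properties": {
    "sku": { "type": "string" },
    "price": { "type": "number", "minimum": 0 },
    "category": { "type": "string" }
  }
}
```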
## Example Workflows

```bash
# Find all error logs from the last hour
json_query logs.jsonl --filter '{"level":"error","timestamp":">=2024-10-22T10:00:00Z"}' --groupBy error_type --aggregate count
```

```bash
# Convert product catalog to CSV for Excel
json_transform products.json --output products.csv --format csv --filter '{"category":"Electronics"}'
```

```bash
# Update all API timeouts across nested config
json_update config.json --path "**.timeout" --value 60000
```

```bash
# Validate multiple data files against schema
json_validate data-*.json --schema product-schema.json --output detailed
```
### json_audit

Explore the schema and structure of large JSON/JSONL files with streaming inference.

Features:

- Streaming inference for arrays, objects, and JSONL
- JSON Schema generation (draft 2020-12)
- Field statistics (types, null rates, examples)
- Sampling and item limits for very large datasets
- Map/dictionary detection: objects with many dynamic keys (author names, dates, IDs) are automatically collapsed to an `additionalProperties` pattern instead of listing thousands of keys (see the example below)

Examples:
```typescript
// Audit file structure
json_audit({ file: "data.json" })

// Sample large files
json_audit({ file: "large.json", sampling: 0.25, maxItems: 50000 })

// Disable map collapsing for the full schema
json_audit({ file: "data.json", collapseMapThreshold: 0 })
```
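For example, an object keyed by thousands of user IDs would be collapsed to a single `additionalProperties` schema instead of one entry per key. The output below is illustrative; the exact shape json-mcp emits may differ:

```json
{
  "type": "object",
  "additionalProperties": {
    "type": "object",
    "properties": {
      "name": { "type": "string" },
      "lastSeen": { "type": "string" }
    }
  }
}
```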
## Performance

json-mcp is designed to handle files of any size:

- Automatic streaming for JSONL files (tested with 100MB+; see the sketch below)
- Memory-efficient operations with large JSON arrays
- Fast filtering without loading the entire file into memory
- Atomic writes prevent data corruption
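Conceptually, the streaming path reads JSONL one line at a time, so memory use stays flat no matter how large the file grows. A minimal sketch of the idea in Node.js (illustrative only, not json-mcp's actual implementation):

```typescript
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

// Illustrative sketch: filter a JSONL file one line at a time,
// keeping only the matching records in memory.
async function filterJsonl(path: string, predicate: (r: any) => boolean) {
  const matches: any[] = [];
  const lines = createInterface({ input: createReadStream(path) });
  for await (const line of lines) {
    if (!line.trim()) continue;      // skip blank lines
    const record = JSON.parse(line); // one record per line in JSONL
    if (predicate(record)) matches.push(record);
  }
  return matches;
}
```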
Tested on:
- 5MB product catalogs (51,000+ records)
- 100MB+ JSONL logs (500,000+ entries)
- 189MB GeoJSON files (deeply nested structures)
## Architecture

json-mcp uses a declarative tool pattern that makes it easy to add new tools:

```typescript
export const tools: Tool[] = [
  {
    name: "json_query",
    description: "Extract and query JSON data...",
    inputSchema: { /* JSON Schema */ },
    handler: jsonQueryHandler
  },
  // Add more tools...
];
```

See ARCHITECTURE.md for implementation details.
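A handler is an async function from validated tool input to an MCP result. A minimal sketch of what a handler like `jsonQueryHandler` could look like, assuming a simplified exact-match filter (hypothetical types and logic, not the real implementation):

```typescript
import { readFile } from "node:fs/promises";

// Hypothetical input type; the real schema lives in src/tools/index.ts.
interface QueryInput {
  file: string;
  filter?: Record<string, unknown>;
  limit?: number;
}

// Minimal handler sketch: load the file, keep records whose fields
// exactly match the filter, and apply the optional limit.
async function jsonQueryHandler(input: QueryInput) {
  const records: any[] = JSON.parse(await readFile(input.file, "utf8"));
  const matched = records.filter((r) =>
    Object.entries(input.filter ?? {}).every(([k, v]) => r[k] === v)
  );
  const limited = matched.slice(0, input.limit ?? matched.length);
  // MCP tool results are returned as content blocks.
  return { content: [{ type: "text", text: JSON.stringify(limited, null, 2) }] };
}
```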
## Development

### Build

```bash
npm install
npm run build
```

### Run in development mode

```bash
npm run dev
```

### Testing

The project includes comprehensive test scenarios with real-world datasets. See ARCHIVE/ for testing methodology and results.
## Contributing

Contributions welcome! The declarative architecture makes adding new tools straightforward:

1. Add the tool definition to `src/tools/index.ts`
2. Implement the handler function
3. Update tests
4. Submit a PR

## Research & Validation
This project underwent rigorous comparative testing to validate its effectiveness:
- 5 real-world test scenarios (query, transform, update, validate, large files)
- Parallel baseline testing (without MCP tools)
- Comparative testing (with json-mcp tools)
- Quantified improvements (76% turn reduction, 83% tool call reduction)
For detailed methodology and results, see ARCHIVE/.
## Challenge Results

We completed multiple JSON-MCP challenges to validate real-world usage:

- JSONTestSuite parsing compatibility: see `CHALLENGE_RESULTS.md`
- Real dataset filtering/aggregation/transform: see `REAL_CHALLENGE_RESULTS.md`

Commands used and prompts are captured in `JSON_CHALLENGES.md`.

## License

MIT
## Links

- GitHub Repository
- npm Package
- Model Context Protocol
- Claude Code Documentation

## Support

- Issues: GitHub Issues
- Discussions: GitHub Discussions
---
Built with the Model Context Protocol for Claude Code