# SparkECoder

A powerful coding agent CLI with HTTP API for development environments. Built with the Vercel AI SDK.

```bash
npm install sparkecoder
```

## Features

- Multi-Agent Sessions - Run multiple agents simultaneously with isolated contexts
- Streaming Responses - Real-time SSE streaming following the Vercel AI SDK data stream protocol
- Powerful Tools - Bash execution, file operations, planning, and skill loading
- Tool Approvals - Configurable approval requirements for dangerous operations
- Skills System - Load specialized knowledge documents into context
- SQLite Persistence - Full session and message history storage
- HTTP API - RESTful API with auto-generated OpenAPI specification via hono-openapi
- Context Management - Automatic summarization for long conversations
## Installation

```bash
# Configure npm to use GitHub Packages for the @gostudyfetchgo scope
npm config set @gostudyfetchgo:registry https://npm.pkg.github.com
```

### From Source

```bash
# Clone the repository
git clone https://github.com/gostudyfetchgo/sparkecoder.git
cd sparkecoder

# Install dependencies
pnpm install

# Set up environment variables
export AI_GATEWAY_API_KEY=your_api_key_here

# Start the server
pnpm dev
```

### Global Install

```bash
npm install -g @gostudyfetchgo/sparkecoder
sparkecoder start
```

## Quick Start

### 1. Initialize Configuration

```bash
sparkecoder init
```

This creates a `sparkecoder.config.json` file with default settings.

### 2. Start the Server

```bash
sparkecoder start
```

The server runs at `http://localhost:3141` by default.

### 3. Send a Prompt

```bash
# Create a session and run a prompt
curl -X POST http://localhost:3141/agents/quick \
  -H "Content-Type: application/json" \
  -d '{"prompt": "List the files in the current directory"}'
```

## API Reference

### Sessions

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/sessions` | GET | List all sessions |
| `/sessions` | POST | Create a new session |
| `/sessions/:id` | GET | Get session details |
| `/sessions/:id` | DELETE | Delete a session |
| `/sessions/:id/messages` | GET | Get session messages |
| `/sessions/:id/clear` | POST | Clear session context |
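
For example, a minimal TypeScript sketch that lists sessions and fetches one session's messages might look like this (the response field names, such as `id`, are assumptions; see `/openapi.json` for the exact schemas):

```ts
// Minimal sketch of the session endpoints. Response field names (e.g. `id`)
// are assumptions; consult /openapi.json for the exact schemas.
const BASE = "http://localhost:3141";

const sessions = await fetch(`${BASE}/sessions`).then((r) => r.json());
console.log("sessions:", sessions);

// Assuming the response is an array of session objects exposing an `id` field:
const firstId = sessions[0]?.id;
if (firstId) {
  const messages = await fetch(`${BASE}/sessions/${firstId}/messages`).then((r) => r.json());
  console.log("messages:", messages);
}
```
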
### Agents

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/agents/:id/run` | POST | Run agent with streaming (SSE) |
| `/agents/:id/generate` | POST | Run agent without streaming |
| `/agents/:id/approve/:toolCallId` | POST | Approve pending tool |
| `/agents/:id/reject/:toolCallId` | POST | Reject pending tool |
| `/agents/:id/approvals` | GET | Get pending approvals |
| `/agents/quick` | POST | Create session and run in one request |
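
As a non-streaming counterpart to the curl example in the Quick Start, here is a rough TypeScript sketch of calling `/agents/quick` (the request body mirrors that example; the response shape is an assumption):

```ts
// Create a session and run a prompt in a single request via /agents/quick.
// The { prompt } body mirrors the Quick Start curl example; the response
// shape is an assumption -- check /openapi.json for the exact schema.
const res = await fetch("http://localhost:3141/agents/quick", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "List the files in the current directory" }),
});

if (!res.ok) throw new Error(`quick run failed: ${res.status}`);
console.log(await res.json());
```
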
### OpenAPI Specification

The full OpenAPI specification is available at `/openapi.json`.

## Configuration

Create a `sparkecoder.config.json` file:

```json
{
  "defaultModel": "anthropic/claude-sonnet-4-20250514",
  "workingDirectory": ".",
  "toolApprovals": {
    "bash": true,
    "write_file": false,
    "read_file": false
  },
  "skills": {
    "directory": "./skills"
  },
  "context": {
    "maxChars": 200000,
    "autoSummarize": true
  },
  "server": {
    "port": 3141,
    "host": "127.0.0.1",
    "publicUrl": "http://your-server:3141"
  }
}
```

### Configuration Options

| Option | Description | Default |
|--------|-------------|---------|
| `defaultModel` | Vercel AI Gateway model string | `anthropic/claude-opus-4-6` |
| `workingDirectory` | Base directory for file operations | Current directory |
| `toolApprovals` | Which tools require user approval | `{ bash: true }` |
| `skills.directory` | Directory containing skill files | `./skills` |
| `context.maxChars` | Max context size before summarization | `200000` |
| `context.autoSummarize` | Enable automatic summarization | `true` |
| `server.port` | HTTP server port | `3141` |
| `server.host` | HTTP server host | `127.0.0.1` |
| `server.publicUrl` | Public URL for web UI (Docker/remote) | Auto-detected |

## Tools

### bash

Execute shell commands in the working directory.

```json
{
  "command": "ls -la"
}
```

### read_file

Read file contents, with an optional line range.

```json
{
  "path": "src/index.ts",
  "startLine": 1,
  "endLine": 50
}
```

### write_file

Write or edit files. Supports two modes.

Full write:

```json
{
  "path": "new-file.ts",
  "mode": "full",
  "content": "// New file content"
}
```

String replacement:

```json
{
  "path": "existing-file.ts",
  "mode": "str_replace",
  "old_string": "const x = 1;",
  "new_string": "const x = 2;"
}
```

### Planning

Manage task lists for complex operations.

```json
{
  "action": "add",
  "items": [
    { "content": "Step 1: Analyze code" },
    { "content": "Step 2: Implement fix" }
  ]
}
```

### Skill Loading

Load specialized knowledge into context.

```json
{
  "action": "load",
  "skillName": "Debugging"
}
```

## Skills

Skills are markdown files with specialized knowledge. Place them in your skills directory:

```markdown
---
name: My Custom Skill
description: Description of what this skill provides
---

# My Custom Skill

Detailed content that will be loaded into context...
```

Built-in skills:

- Debugging - Systematic debugging approaches
- Code Review - Code review checklists and best practices
- Refactoring - Safe refactoring patterns and techniques

## CLI Commands

```bash
sparkecoder start # Start the HTTP server
sparkecoder chat # Interactive chat with the agent
sparkecoder init # Create config file
sparkecoder sessions # List all sessions
sparkecoder status # Check if server is running
sparkecoder config # Show current configuration
sparkecoder info # Show version and environment
```

### Interactive Chat

Start an interactive chat session with the agent:

```bash
# Start a new chat session
sparkecoder chat

# Resume an existing session
sparkecoder chat --session <session-id>

# Start with custom options
sparkecoder chat --name "My Project" --model "anthropic/claude-sonnet-4-20250514"
```

In-chat commands:

- `/quit` or `/exit` - Exit the chat
- `/clear` - Clear conversation history
- `/session` - Show current session info
- `/tools` - List available tools

## Streaming Protocol

The API uses Server-Sent Events (SSE) following the Vercel AI SDK data stream protocol.
It is compatible with `useChat` from `@ai-sdk/react`:

```tsx
import { useChat } from '@ai-sdk/react';

const { messages, sendMessage } = useChat({
  api: 'http://localhost:3141/agents/SESSION_ID/run',
});
```
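
Outside of React, the stream can be consumed directly. The sketch below POSTs a prompt to the streaming endpoint and prints the raw SSE chunks; the `{ prompt }` request body is an assumption based on the Quick Start example, and the event payloads follow the AI SDK data stream protocol:

```ts
// Minimal sketch: consume the SSE stream from /agents/:id/run without useChat.
// The { prompt } body and the exact event shapes are assumptions; the stream
// itself follows the Vercel AI SDK data stream protocol.
const res = await fetch("http://localhost:3141/agents/SESSION_ID/run", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "Explain what this repository does" }),
});

if (!res.ok || !res.body) throw new Error(`run failed: ${res.status}`);

const reader = res.body.getReader();
const decoder = new TextDecoder();

// Print each SSE chunk as it arrives; a real client would parse the
// `data:` lines into JSON events (text deltas, tool calls, etc.).
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value, { stream: true }));
}
```
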
## Tool Approvals

Configure which tools require approval:

```json
{
  "toolApprovals": {
    "bash": true,       // Requires approval
    "write_file": true  // Requires approval
  }
}
```

When approval is required:
1. The agent pauses and streams an `approval-required` event.
2. Call `/agents/:id/approve/:toolCallId` to approve.
3. Call `/agents/:id/reject/:toolCallId` to reject.
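
The round trip can also be scripted against the endpoints above. In this rough TypeScript sketch, the shape of the approvals response (including a `toolCallId` field) is an assumption; only the endpoint paths come from the API reference:

```ts
// Sketch of the approval workflow for a session with a pending tool call.
// Field names on the approvals response (e.g. `toolCallId`) are assumptions;
// the endpoint paths come from the API reference above.
const BASE = "http://localhost:3141";
const sessionId = "SESSION_ID";

// 1. List pending approvals for the session.
const pending = await fetch(`${BASE}/agents/${sessionId}/approvals`).then((r) => r.json());
console.log("pending approvals:", pending);

// 2. Approve (or reject) a specific tool call by its id, taken from the
//    approval-required event or the approvals list.
const toolCallId = "TOOL_CALL_ID";
await fetch(`${BASE}/agents/${sessionId}/approve/${toolCallId}`, { method: "POST" });
// To reject instead:
// await fetch(`${BASE}/agents/${sessionId}/reject/${toolCallId}`, { method: "POST" });
```
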
## Environment Variables

| Variable | Description |
|----------|-------------|
| `AI_GATEWAY_API_KEY` | Vercel AI Gateway API key (required) |
| `SPARKECODER_MODEL` | Override default model |
| `SPARKECODER_PORT` | Override server port |
| `DATABASE_PATH` | Override database path |

## Docker / Remote Access

When running SparkECoder in Docker or exposing it to remote clients, you need to configure the public URL so the web UI can connect to the API from the browser.

### Using the CLI Flag

```bash
sparkecoder start --public-url http://your-server:3141
```

### Using the Config File

```json
{
  "server": {
    "port": 3141,
    "host": "0.0.0.0",
    "publicUrl": "http://your-server:3141"
  }
}
```

Notes:
- Set `host` to `0.0.0.0` to bind to all interfaces (required for Docker/remote access)
- Set `publicUrl` to the URL the browser will use to reach the API
- The web UI detects this URL automatically on first load and stores it in localStorage

## Development

```bash
# Run in development mode with hot reload
pnpm dev

# Type check
pnpm typecheck

# Build for production
pnpm build

# Run the production build
pnpm start
```

## Testing

SparkECoder includes comprehensive end-to-end tests that make actual API calls to the LLM.

```bash
# Run all E2E tests (requires AI_GATEWAY_API_KEY)
pnpm test:e2e

# Run tests in watch mode
pnpm test:watch
```

The tests cover:

- Health & server endpoints
- Session management (CRUD operations)
- Agent text generation (streaming & non-streaming)
- File operations (create, read, edit)
- Bash command execution
- Todo management
- Multi-turn conversations with context
- Tool approvals workflow
Note: E2E tests require a valid `AI_GATEWAY_API_KEY` and will make real LLM calls. They create a temporary `.test-workspace` directory that is cleaned up after tests complete.

## License

Proprietary - All rights reserved.