# btca-server

BTCA (Better Context AI) server for answering questions about your codebase using OpenCode AI.

## Installation
```bash
bun add btca-server
```

## Usage

```typescript
import { startServer } from 'btca-server';

// Start with default options (port 8080 or process.env.PORT)
const server = await startServer();
console.log(`Server running at ${server.url}`);

// Start with a custom port
const customServer = await startServer({ port: 3000 });

// Start in quiet mode (no logging)
const quietServer = await startServer({ port: 3000, quiet: true });

// Stop the server when needed
server.stop();
```
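For long-running processes you may want to shut down cleanly. A usage sketch (the signal handling is standard Bun/Node runtime behavior, not part of the btca-server API):

```typescript
import { startServer } from 'btca-server';

const server = await startServer();

// Stop the server on Ctrl+C; SIGINT handling is standard runtime behavior
process.on('SIGINT', () => {
  server.stop();
  process.exit(0);
});
```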
The `startServer` function returns a `ServerInstance` object:
```typescript
interface ServerInstance {
port: number; // Actual port the server is running on
url: string; // Full URL (e.g., "http://localhost:8080")
stop: () => void; // Function to stop the server
}
```
You can pass `port: 0` to let the OS assign a random available port:
```typescript
const server = await startServer({ port: 0 });
console.log(`Server running on port ${server.port}`);
```
## API

Once the server is running, it exposes the following REST API endpoints:
### `GET /`

Returns service status and version info.
### `GET /config`

Returns current configuration (provider, model, resources).
### `GET /resources`

Lists all configured resources (local directories or git repositories).
### `POST /config/resources`

Add a new resource (git or local).
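The request body shape isn't documented above; the sketch below assumes the JSON fields mirror the `[[resources]]` entries in `config.toml` (see Configuration below):

```typescript
// Hypothetical body: type/name/url/branch are assumed to mirror config.toml
await fetch('http://localhost:8080/config/resources', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    type: 'git',
    name: 'some-repo',
    url: 'https://github.com/user/repo',
    branch: 'main',
  }),
});
```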
### `DELETE /config/resources`

Remove a resource by name.
### `POST /clear`

Clear all locally cloned resources.
### `POST /question`

Ask a question (non-streaming response). The `resources` field accepts configured resource names or HTTPS Git URLs.
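A minimal sketch of calling this endpoint; only the `resources` field is documented, so the `question` field name is an assumption:

```typescript
// `resources` is documented above; the `question` field name is assumed
const res = await fetch('http://localhost:8080/question', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    question: 'Where is the HTTP router defined?',
    resources: ['my-project'],
  }),
});
console.log(await res.json());
```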
### `POST /question/stream`

Ask a question with a streaming SSE response. The `resources` field accepts configured resource names or HTTPS Git URLs. The final `done` event may include optional usage/metrics (tokens, timing, throughput, and best-effort pricing).
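A sketch of consuming the stream with `fetch` and a reader; the request body follows the same assumptions as the `/question` example above:

```typescript
// Stream the SSE response and print raw "data:" payloads
const res = await fetch('http://localhost:8080/question/stream', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ question: 'How does auth work?', resources: ['my-project'] }),
});

const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Note: chunks can split mid-line; a robust client should buffer partial lines
  for (const line of value.split('\n')) {
    if (line.startsWith('data:')) console.log(line.slice(5).trim());
  }
}
```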
### `PUT /config/model`

Update the AI provider and model configuration.
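A sketch; the `provider` and `model` body fields are assumed to match the keys in `config.toml`:

```typescript
// Hypothetical body, mirroring the provider/model keys in config.toml
await fetch('http://localhost:8080/config/model', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ provider: 'anthropic', model: 'claude-3-7-sonnet-20250219' }),
});
```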
## Configuration

The server reads configuration from `~/.btca/config.toml` or your local project's `.btca/config.toml`. You'll need to configure:

- AI Provider: the AI provider to use (e.g., `"opencode"`, `"anthropic"`)
- Model: the AI model to use (e.g., `"claude-3-7-sonnet-20250219"`)
- Resources: local directories or git repositories to query
Example `config.toml`:
```toml
provider = "anthropic"
model = "claude-3-7-sonnet-20250219"
resourcesDirectory = "~/.btca/resources"
[[resources]]
type = "local"
name = "my-project"
path = "/path/to/my/project"
[[resources]]
type = "git"
name = "some-repo"
url = "https://github.com/user/repo"
branch = "main"
```
## Supported Providers

BTCA supports the following providers only:

- `opencode` — API key required
- `openrouter` — API key required
- `openai` — OAuth only
- `google` — API key or OAuth
- `anthropic` — API key required
Authenticate providers via OpenCode:

```bash
opencode auth --provider <provider>
```
- OpenCode and OpenRouter can use environment variables or OpenCode auth.
- OpenAI requires OAuth (API keys are not supported).
- Anthropic requires an API key.
- Google supports API key or OAuth.
## Environment Variables

- `PORT`: Server port (default: 8080)
- `OPENCODE_API_KEY`: OpenCode API key (required when provider is `opencode`)
- `OPENROUTER_API_KEY`: OpenRouter API key (required when provider is `openrouter`)
- `OPENROUTER_BASE_URL`: Override OpenRouter base URL (optional)
- `OPENROUTER_HTTP_REFERER`: Optional OpenRouter header for rankings
- `OPENROUTER_X_TITLE`: Optional OpenRouter header for rankings
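When no explicit port is passed, the server falls back to the `PORT` environment variable (per the defaults documented above). A quick sketch:

```typescript
import { startServer } from 'btca-server';

// PORT is read when no explicit port option is given (documented default)
process.env.PORT = '4321';
const server = await startServer();
console.log(server.port); // 4321, assuming the port is free
```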
## Type Exports

The package exports TypeScript types for use with the Hono RPC client:
```typescript
import { hc } from 'hono/client';
import type { AppType } from 'btca-server';

const client = hc<AppType>('http://localhost:8080');
```
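The typed client can then call the endpoints above. A sketch; field names beyond the documented `resources` are assumptions:

```typescript
// Hypothetical typed call to POST /question via Hono RPC
const res = await client.question.$post({
  json: { question: 'What does startServer return?', resources: ['my-project'] },
});
console.log(await res.json());
```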
For working with SSE streaming responses:
```typescript
import type { BtcaStreamEvent, BtcaStreamMetaEvent } from 'btca-server/stream/types';
```
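For example, a small helper that parses raw SSE `data:` lines into typed events (a sketch; it assumes each data payload is a JSON-encoded `BtcaStreamEvent`):

```typescript
import type { BtcaStreamEvent } from 'btca-server/stream/types';

// Assumes each SSE "data:" payload is a JSON-encoded BtcaStreamEvent
function parseStreamEvent(line: string): BtcaStreamEvent | null {
  if (!line.startsWith('data:')) return null;
  return JSON.parse(line.slice(5).trim()) as BtcaStreamEvent;
}
```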
## Requirements

- Bun: >= 1.1.0 (this package is designed specifically for the Bun runtime)
- OpenCode API Key: required when using the `opencode` provider
## License

MIT