# github-copilot-router

OpenAI & Anthropic compatible API router for the GitHub Copilot SDK. Use Copilot with Claude Code, OpenAI Codex CLI, and more.

```bash
npm install github-copilot-router
```

Run Claude Code and OpenAI Codex with your Copilot subscription.
One command. No extra API keys. Just your existing GitHub Copilot plan.
1. Install GitHub Copilot CLI
```bash
# macOS/Linux
brew install copilot-cli

# Windows
winget install GitHub.Copilot

# npm (macOS, Linux, and Windows)
npm install -g @github/copilot
```
2. Authenticate
```bash
copilot
# Inside the CLI, type:
/login
```
3. Install and run
```bash
npm install -g github-copilot-router

gcr cc   # Launch Claude Code
gcr cx   # Launch OpenAI Codex
gcr      # Start the router server only
```
The server will start at http://localhost:7318.
> Tip: You can also authenticate by setting the `GITHUB_TOKEN` environment variable to a PAT with the "Copilot Requests" permission.
## Claude Code

Claude Code can be configured to use this router as its backend, allowing you to use GitHub Copilot models through Claude Code's interface.
The easiest way to use Claude Code with the router - no configuration needed:
```bash
gcr cc   # or: gcr claude-code
```
This starts the router, launches Claude Code with the correct environment variables, and cleans up when you exit. All arguments are passed through:
```bash
gcr cc --resume
gcr cc --dangerously-skip-permissions
```
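Conceptually, a wrapper like `gcr cc` just forwards its remaining arguments to the Claude Code binary with the router environment set. A hypothetical sketch of that forwarding (the `claude` binary name and helper are assumptions for illustration, not the package's actual code):

```python
def build_claude_command(passthrough_args, base_url="http://localhost:7318"):
    """Build the argv and extra environment for launching Claude Code via the router."""
    env = {
        "ANTHROPIC_BASE_URL": base_url,          # point the Anthropic SDK at the router
        "ANTHROPIC_AUTH_TOKEN": "not-required",  # any non-empty string works
    }
    # Everything after `gcr cc` is forwarded untouched.
    return ["claude", *passthrough_args], env
```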
If you prefer to run the router separately:
1. Start the router (keep it running in a terminal):
```bash
gcr
```
2. Configure Claude Code by creating/editing .claude/settings.json in your project:
```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:7318",
    "ANTHROPIC_AUTH_TOKEN": "not-required",
    "ANTHROPIC_API_KEY": "",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "github-copilot/claude-haiku-4.5",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "github-copilot/claude-sonnet-4.5",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "github-copilot/claude-opus-4.5"
  }
}
```
3. Restart Claude Code to pick up the new configuration.
- `ANTHROPIC_AUTH_TOKEN` can be any non-empty string (authentication is handled by GitHub Copilot)
- `ANTHROPIC_API_KEY` should be empty or omitted
- Model names in the config should match models available in GitHub Copilot
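The `github-copilot/` prefix in those model IDs identifies the provider; a router of this kind would typically split the prefix off before forwarding the request upstream. A hypothetical sketch of that parsing (assumed behavior, not the package's actual code):

```python
def split_provider_model(model_id: str, default_provider: str = "github-copilot"):
    """Split 'provider/model' into (provider, model); bare IDs get the default provider."""
    provider, sep, model = model_id.partition("/")
    if not sep:
        # No slash: treat the whole string as the model name.
        return default_provider, model_id
    return provider, model
```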
## OpenAI Codex CLI

OpenAI Codex CLI can be configured to use this router as a custom model provider.
The easiest way to use Codex with the router - no configuration needed:
```bash
gcr cx   # or: gcr codex
```
This starts the router, launches Codex with the correct provider configuration, and cleans up when you exit. All arguments are passed through:
```bash
gcr cx --model gpt-4o
gcr cx --full-auto "fix the tests"
```
If you prefer to run the router separately:
1. Start the router (keep it running in a terminal):
```bash
gcr
```
2. Configure Codex CLI by creating/editing ~/.codex/config.toml:
```toml
model = "gpt-5.2-codex"
model_provider = "proxy"

[model_providers.proxy]
name = "OpenAI using GitHub Copilot Router"
base_url = "http://localhost:7318/v1"
wire_api = "responses"
```
3. Run Codex as normal:
```bash
codex
```
It will now route requests through GitHub Copilot.
- The model name should match a model available in GitHub Copilot (e.g., `gpt-5.2-codex`, `gpt-4o`, `claude-sonnet-4.5`)
- No API key configuration is needed; authentication is handled by GitHub Copilot
## API Endpoints

| Endpoint | Method | Format | Description |
|----------|--------|--------|-------------|
| `/v1/responses` | POST | OpenAI | Responses API (recommended) |
| `/v1/responses/input_tokens` | POST | OpenAI | Token counting |
| `/v1/chat/completions` | POST | OpenAI | Chat completions (legacy) |
| `/v1/models` | GET | OpenAI | List available models |
| `/v1/messages` | POST | Anthropic | Messages API |
| `/v1/messages/count_tokens` | POST | Anthropic | Token counting |
| `/health` | GET | - | Health check |
> Note: `/v1/responses` is the newer OpenAI Responses API format and is recommended over `/v1/chat/completions`. Some clients, such as OpenAI Codex CLI, use the `wire_api = "responses"` configuration.
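Since the router exposes both OpenAI and Anthropic endpoints in front of one backend, it has to translate between the two wire formats. A much-simplified sketch of the Anthropic-to-OpenAI direction (illustrative only, not the package's actual code):

```python
def anthropic_to_openai(body: dict) -> dict:
    """Translate a minimal Anthropic Messages request into OpenAI chat-completions shape."""
    messages = []
    if body.get("system"):
        # Anthropic carries the system prompt in a top-level field;
        # OpenAI expects it as the first message.
        messages.append({"role": "system", "content": body["system"]})
    messages.extend(body.get("messages", []))
    out = {"model": body["model"], "messages": messages}
    if "max_tokens" in body:
        out["max_tokens"] = body["max_tokens"]
    return out
```

Real translation also has to cover tool calls, content blocks, and streaming event formats, which this sketch omits.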
## Examples

Non-streaming (OpenAI format):

```bash
curl http://localhost:7318/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
Anthropic format:

```bash
curl http://localhost:7318/v1/messages \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4.5",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
Python (OpenAI SDK):

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:7318/v1",
    api_key="not-required"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
Python (Anthropic SDK):

```python
from anthropic import Anthropic

client = Anthropic(
    base_url="http://localhost:7318",
    api_key="not-required"
)

response = client.messages.create(
    model="claude-sonnet-4.5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.content[0].text)
```

## Configuration
### CLI Commands

| Command | Description |
|---------|-------------|
| `gcr` | Start the router server |
| `gcr claude-code` | Launch Claude Code through the router |
| `gcr cc` | Alias for `claude-code` |
| `gcr codex` | Launch OpenAI Codex through the router |
| `gcr cx` | Alias for `codex` |

> Note: `copilot-router` is an alias for `gcr` (e.g., `copilot-router cc` works too).

Options:

- `--port, -p` - Port for the router (default: 7318)
- `--help, -h` - Show help
- `--version, -v` - Show version

### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | 7318 | Server port |
| `GITHUB_TOKEN` | - | GitHub PAT for authentication |

## Troubleshooting
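Resolving the port follows the usual env-with-default pattern. A sketch in Python (illustrative; the actual package is a Node CLI and may resolve it differently):

```python
import os

def resolve_port(env=None):
    """The PORT environment variable wins; otherwise fall back to the default 7318."""
    env = dict(os.environ) if env is None else env
    return int(env.get("PORT", "7318"))
```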
### Authentication errors

You're not authenticated with GitHub Copilot. Follow the authentication steps above.
### Conflict with AWS Copilot

You have AWS Copilot installed, which conflicts with the GitHub Copilot CLI. Uninstall both (`brew uninstall copilot-cli`), then install GitHub Copilot again.

## License

MIT