# oc-chatgpt-multi-auth

Multi-account rotation plugin for ChatGPT Plus/Pro (OAuth / Codex backend).

```bash
npm install oc-chatgpt-multi-auth
```
OAuth plugin for OpenCode that lets you use ChatGPT Plus/Pro rate limits with models like gpt-5.2, gpt-5.3-codex, and gpt-5.1-codex-max.
> [!NOTE]
> Renamed from `opencode-openai-codex-auth-multi` — if you were using the old package, update your config to use `oc-chatgpt-multi-auth` instead. The rename was necessary because OpenCode blocks plugins with `opencode-openai-codex-auth` in the name.
- GPT-5.2, GPT-5.3 Codex, GPT-5.1 Codex Max and all GPT-5.x variants via ChatGPT OAuth
- Multi-account support — Add up to 20 ChatGPT accounts, health-aware rotation with automatic failover
- Per-project accounts — Each project gets its own account storage (new in v4.10.0)
- Click-to-switch — Switch accounts directly from the OpenCode TUI
- Strict tool validation — Automatically cleans schemas for compatibility with strict models
- Auto-update notifications — Get notified when a new version is available
- 22 model presets — Full variant system with reasoning levels (none/low/medium/high/xhigh)
- Prompt caching — Session-based caching for faster multi-turn conversations
- Usage-aware errors — Friendly messages with rate limit reset timing
- Plugin compatible — Works alongside other OpenCode plugins (oh-my-opencode, dcp, etc.)
---
## Terms of Service Warning — Read Before Installing
> [!CAUTION]
> This plugin uses OpenAI's official OAuth authentication (the same method as OpenAI's official Codex CLI) for personal development use with your ChatGPT Plus/Pro subscription.
>
> This plugin is for personal development only:
> - Not for commercial services, API resale, or multi-user applications
> - For production use, see OpenAI Platform API
>
> By using this plugin, you acknowledge:
> - This is an unofficial tool not endorsed by OpenAI
> - Users are responsible for compliance with OpenAI's Terms of Use
> - You assume all risks associated with using this plugin
---
## For Humans

### Option A: Let an LLM do it
Paste this into any LLM agent (Claude Code, OpenCode, Cursor, etc.):
```
Install the oc-chatgpt-multi-auth plugin and add the OpenAI model definitions to ~/.config/opencode/opencode.json by following: https://raw.githubusercontent.com/ndycode/oc-chatgpt-multi-auth/main/README.md
```
### Option B: One-command install

```bash
npx -y oc-chatgpt-multi-auth@latest
```

This writes the config to ~/.config/opencode/opencode.json, backs up your existing config, and clears the plugin cache.

> Want legacy config (OpenCode v1.0.209 and below)? Add the `--legacy` flag.
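For example, the legacy config can presumably be generated by passing the flag to the same installer command:

```bash
npx -y oc-chatgpt-multi-auth@latest --legacy
```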
### Option C: Manual setup

1. Add the plugin to ~/.config/opencode/opencode.json:

   ```json
   {
     "plugin": ["oc-chatgpt-multi-auth@latest"]
   }
   ```

2. Login with your ChatGPT account:

   ```bash
   opencode auth login
   ```

3. Add models — copy the full configuration below
4. Use it:

   ```bash
   opencode run "Hello" --model=openai/gpt-5.2 --variant=medium
   ```
## For LLM Agents

1. Edit the OpenCode configuration file at ~/.config/opencode/opencode.json

   > Note: This path works on all platforms. On Windows, ~ resolves to your user home directory (e.g., C:\Users\YourName).

2. Add the plugin to the plugin array:

   ```json
   {
     "plugin": ["oc-chatgpt-multi-auth@latest"]
   }
   ```

3. Add the model definitions from the Full Models Configuration section
4. Set the provider to "openai" and choose a model:

   ```bash
   opencode run "Hello" --model=openai/gpt-5.2 --variant=medium
   ```
---
## Models

| Model | Variants | Notes |
|-------|----------|-------|
| gpt-5.2 | none, low, medium, high, xhigh | Latest GPT-5.2 with reasoning levels |
| gpt-5.3-codex | low, medium, high, xhigh | Latest GPT-5.3 Codex for code generation (default: xhigh) |
| gpt-5.1-codex-max | low, medium, high, xhigh | Maximum context Codex |
| gpt-5.1-codex | low, medium, high | Standard Codex |
| gpt-5.1-codex-mini | medium, high | Lightweight Codex |
| gpt-5.1 | none, low, medium, high | GPT-5.1 base model |
Using variants:
```bash
# Modern OpenCode (v1.0.210+)
opencode run "Hello" --model=openai/gpt-5.2 --variant=high
```
### Full Models Configuration (Copy-Paste Ready)

Add this to your ~/.config/opencode/opencode.json:

```json
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["oc-chatgpt-multi-auth@latest"],
"provider": {
"openai": {
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": ["reasoning.encrypted_content"],
"store": false
},
"models": {
"gpt-5.2": {
"name": "GPT 5.2 (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"none": { "reasoningEffort": "none" },
"low": { "reasoningEffort": "low" },
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" },
"xhigh": { "reasoningEffort": "xhigh" }
}
},
"gpt-5.3-codex": {
"name": "GPT 5.3 Codex (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"low": { "reasoningEffort": "low" },
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" },
"xhigh": { "reasoningEffort": "xhigh" }
},
"options": {
"reasoningEffort": "xhigh",
"reasoningSummary": "detailed"
}
},
"gpt-5.1-codex-max": {
"name": "GPT 5.1 Codex Max (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"low": { "reasoningEffort": "low" },
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" },
"xhigh": { "reasoningEffort": "xhigh" }
}
},
"gpt-5.1-codex": {
"name": "GPT 5.1 Codex (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"low": { "reasoningEffort": "low" },
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" }
}
},
"gpt-5.1-codex-mini": {
"name": "GPT 5.1 Codex Mini (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" }
}
},
"gpt-5.1": {
"name": "GPT 5.1 (OAuth)",
"limit": { "context": 272000, "output": 128000 },
"modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
"variants": {
"none": { "reasoningEffort": "none" },
"low": { "reasoningEffort": "low" },
"medium": { "reasoningEffort": "medium" },
"high": { "reasoningEffort": "high" }
}
}
}
}
}
}
```

For legacy OpenCode (v1.0.209 and below), use config/opencode-legacy.json, which has individual model entries like gpt-5.2-low, gpt-5.2-medium, etc.
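As a rough sketch (the exact entries live in config/opencode-legacy.json), a legacy-style definition presumably bakes the reasoning level into each model ID instead of using variants, along these lines:

```json
{
  "models": {
    "gpt-5.2-medium": {
      "name": "GPT 5.2 Medium (OAuth)",
      "limit": { "context": 272000, "output": 128000 },
      "options": { "reasoningEffort": "medium" }
    },
    "gpt-5.2-high": {
      "name": "GPT 5.2 High (OAuth)",
      "limit": { "context": 272000, "output": 128000 },
      "options": { "reasoningEffort": "high" }
    }
  }
}
```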
---

## Multi-Account Setup
Add multiple ChatGPT accounts for higher combined quotas. The plugin uses health-aware rotation with automatic failover and supports up to 20 accounts.
```bash
opencode auth login   # Run again to add more accounts
```

---
## Account Management Tools
The plugin provides built-in tools for managing your OpenAI accounts. These are available directly in OpenCode — just ask the agent or type the tool name.
> Note: Tools were renamed from `openai-accounts-` to `codex-` in v4.12.0 for brevity.

### codex-list

List all configured accounts with their status.

```
codex-list
```

Output:

```
OpenAI Accounts (3 total):
[1] user@gmail.com (active)
[2] work@company.com
[3] backup@email.com

Use codex-switch to change active account.
```

---
### codex-switch

Switch to a different account by index (1-based).

```
codex-switch index=2
```

Output:

```
Switched to account [2] work@company.com
```

---
### codex-status

Show detailed status including rate limits and health scores.

```
codex-status
```

Output:

```
OpenAI Account Status:

[1] user@gmail.com (active)
Health: 100/100
Rate Limit: 45/50 requests remaining
Resets: 2m 30s
Last Used: 5 minutes ago

[2] work@company.com
Health: 85/100
Rate Limit: 12/50 requests remaining
Resets: 8m 15s
Last Used: 1 hour ago
```

---
### codex-metrics

Show live runtime metrics (request counts, latency, errors, rotations) for the current plugin process.

```
codex-metrics
```

Output:

```
Codex Plugin Metrics:
Uptime: 12m
Total upstream requests: 84
Successful responses: 77
Failed responses: 7
Average successful latency: 842ms
```

---
### codex-health

Check if all account tokens are still valid (read-only check).

```
codex-health
```

Output:

```
Checking 3 account(s):
✓ [1] user@gmail.com: Healthy
✓ [2] work@company.com: Healthy
✗ [3] old@expired.com: Token expired

Summary: 2 healthy, 1 unhealthy
```

---
### codex-refresh

Refresh all OAuth tokens and save them to disk. Use this after long idle periods.

```
codex-refresh
```

Output:

```
Refreshing 3 account(s):
✓ [1] user@gmail.com: Refreshed
✓ [2] work@company.com: Refreshed
✗ [3] old@expired.com: Failed - Token expired

Summary: 2 refreshed, 1 failed
```

Difference from the health check: codex-health only validates tokens; codex-refresh actually refreshes them and saves the new tokens to disk.

---
### codex-remove

Remove an account by index. Useful for cleaning up expired accounts.

```
codex-remove index=3
```

Output:

```
Removed: [3] old@expired.com
Remaining accounts: 2
```

---
### codex-export

Export all accounts to a portable JSON file. Useful for backup or migration.

```
codex-export path="~/backup/accounts.json"
```

Output:

```
Exported 3 account(s) to ~/backup/accounts.json
```

---
### codex-import

Import accounts from a JSON file (exported via codex-export). Merges with existing accounts.

```
codex-import path="~/backup/accounts.json"
```

Output:

```
Imported 2 new account(s) (1 duplicate skipped)
Total accounts: 4
```

---
### Quick Reference

| Tool | What It Does | Example |
|------|--------------|---------|
| codex-list | List all accounts | "list my accounts" |
| codex-switch | Switch active account | "switch to account 2" |
| codex-status | Show rate limits & health | "show account status" |
| codex-metrics | Show runtime metrics | "show plugin metrics" |
| codex-health | Validate tokens (read-only) | "check account health" |
| codex-refresh | Refresh & save tokens | "refresh my tokens" |
| codex-remove | Remove an account | "remove account 3" |
| codex-export | Export accounts to file | "export my accounts" |
| codex-import | Import accounts from file | "import accounts from backup" |

---
## Rotation Behavior
How rotation works:
- Health scoring tracks success/failure per account
- Token bucket prevents hitting rate limits
- Hybrid selection prefers healthy accounts with available tokens
- Always retries when all accounts are rate-limited (waits for reset with live countdown)
- 20% jitter on retry delays to avoid thundering herd
- Auto-removes accounts after 3 consecutive auth failures (new in v4.11.0)
Per-project accounts (v4.10.0+):

By default, each project gets its own account storage namespace, so you can keep different active accounts per project without writing account files into your repo. This also works from subdirectories; the plugin walks up to find the project root (v4.11.0). Disable it with perProjectAccounts: false in your config (see the sketch below).

Storage locations:

- Per-project: ~/.opencode/projects/{project-key}/openai-codex-accounts.json
- Global (when per-project is disabled): ~/.opencode/openai-codex-accounts.json
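As a minimal sketch, a ~/.opencode/openai-codex-auth-config.json that switches back to the global account store would contain just this option (see the Configuration section below for the full option list):

```json
{
  "perProjectAccounts": false
}
```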
---

## Troubleshooting
> Quick reset: Most issues can be resolved by deleting ~/.opencode/auth/openai.json and running opencode auth login again.
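For example, on macOS/Linux the quick reset amounts to:

```bash
rm ~/.opencode/auth/openai.json
opencode auth login
```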
### File Locations

OpenCode uses ~/.config/opencode/ on all platforms, including Windows.

| File | Path |
|------|------|
| Main config | ~/.config/opencode/opencode.json |
| Auth tokens | ~/.opencode/auth/openai.json |
| Multi-account (global) | ~/.opencode/openai-codex-accounts.json |
| Multi-account (per-project) | ~/.opencode/projects/{project-key}/openai-codex-accounts.json |
| Plugin config | ~/.opencode/openai-codex-auth-config.json |
| Debug logs | ~/.opencode/logs/codex-plugin/ |

> Windows users: ~ resolves to your user home directory (e.g., C:\Users\YourName).

---
### 401 Unauthorized Error
Cause: Token expired or not authenticated.
Solutions:
1. Re-authenticate:

   ```bash
   opencode auth login
   ```

2. Check the auth file exists:

   ```bash
   cat ~/.opencode/auth/openai.json
   ```
### Browser Doesn't Open for OAuth
Cause: Port 1455 conflict or SSH/WSL environment.
Solutions:
1. Manual URL paste:
   - Re-run opencode auth login
   - Select "ChatGPT Plus/Pro (manual URL paste)"
   - Paste the full redirect URL (including #code=...) after login

2. Check port availability:

   ```bash
   # macOS/Linux
   lsof -i :1455

   # Windows
   netstat -ano | findstr :1455
   ```

3. Stop Codex CLI if it's running — both use port 1455
### Model Not Found
Cause: Missing provider prefix or config mismatch.
Solutions:
1. Use the openai/ prefix:

   ```bash
   # Correct
   --model=openai/gpt-5.2

   # Wrong
   --model=gpt-5.2
   ```

2. Verify the model is in your config:

   ```json
   { "models": { "gpt-5.2": { ... } } }
   ```
### Rate Limit Exceeded
Cause: ChatGPT subscription usage limit reached.
Solutions:
1. Wait for reset (plugin shows timing in error message)
2. Add more accounts: opencode auth login
3. Switch to a different model family (see the example below)
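For instance, if gpt-5.3-codex is rate-limited you could run the next prompt against another model from the table above:

```bash
opencode run "Hello" --model=openai/gpt-5.1-codex --variant=high
```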
### Multi-Turn Context Lost
Cause: Old plugin version or missing config.
Solutions:
1. Update plugin:

   ```bash
   npx -y oc-chatgpt-multi-auth@latest
   ```

2. Ensure config has:

   ```json
   {
     "include": ["reasoning.encrypted_content"],
     "store": false
   }
   ```
### OAuth Callback Issues (Safari/WSL/Docker)
Safari HTTPS-only mode:
- Use Chrome or Firefox instead, or
- Temporarily disable Safari > Settings > Privacy > "Enable HTTPS-only mode"
WSL2:
- Use VS Code's port forwarding, or
- Configure Windows → WSL port forwarding (see the sketch below)
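As a hedged sketch (whether you need this, and the exact WSL IP, depend on your setup), a Windows-side portproxy rule forwarding the OAuth callback port into WSL could look like this, run from an elevated Windows shell:

```bash
# Find the WSL VM's IP (run from Windows)
wsl hostname -I

# Forward localhost:1455 on Windows to WSL (replace <WSL_IP> with the address above)
netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=1455 connectaddress=<WSL_IP> connectport=1455
```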
SSH / Remote:

```bash
ssh -L 1455:localhost:1455 user@remote
```

Docker / Containers:
- OAuth with a localhost redirect doesn't work in containers
- Use SSH port forwarding or the manual URL flow
---
## Plugin Compatibility

### oh-my-opencode

Works alongside oh-my-opencode. No special configuration needed.
```json
{
  "plugin": [
    "oc-chatgpt-multi-auth@latest",
    "oh-my-opencode@latest"
  ]
}
```

### dcp

List this plugin before dcp:
```json
{
  "plugin": [
    "oc-chatgpt-multi-auth@latest",
    "@tarquinen/opencode-dcp@latest"
  ]
}
```

### Not needed

- openai-codex-auth — Not needed. This plugin replaces the original.
---
## Configuration

Create ~/.opencode/openai-codex-auth-config.json for optional settings.

### Codex mode, UI & fast session
| Option | Default | What It Does |
|--------|---------|--------------|
| codexMode | true | Uses Codex-OpenCode bridge prompt (synced with latest Codex CLI) |
| codexTuiV2 | true | Enables Codex-style terminal UI output (set false for legacy output) |
| codexTuiColorProfile | truecolor | Terminal color profile for Codex UI (truecolor, ansi256, ansi16) |
| codexTuiGlyphMode | ascii | Glyph mode for Codex UI (ascii, unicode, auto) |
| fastSession | false | Forces low-latency settings per request (reasoningEffort=none/low, reasoningSummary=off, textVerbosity=low) |
| fastSessionStrategy | hybrid | hybrid speeds up simple turns but keeps full depth on complex prompts; always forces fast tuning on every turn |
| fastSessionMaxInputItems | 30 | Max input items kept when fast tuning is applied |

### Accounts & notifications
| Option | Default | What It Does |
|--------|---------|--------------|
| perProjectAccounts | true | Each project gets its own account storage namespace under ~/.opencode/projects/ |
| toastDurationMs | 5000 | How long toast notifications stay visible (ms) |

### Retries & timeouts
| Option | Default | What It Does |
|--------|---------|--------------|
| retryAllAccountsRateLimited | true | Wait and retry when all accounts are rate-limited |
| retryAllAccountsMaxWaitMs | 0 | Max wait time (0 = unlimited) |
| retryAllAccountsMaxRetries | Infinity | Max retry attempts |
| fallbackToGpt52OnUnsupportedGpt53 | true | Automatically retry once with gpt-5.2-codex when gpt-5.3-codex is rejected for the ChatGPT Codex OAuth entitlement |
| fetchTimeoutMs | 60000 | Request timeout to the Codex backend (ms) |
| streamStallTimeoutMs | 45000 | Abort non-stream parsing if SSE stalls (ms) |
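Putting a few of these together, and assuming the options sit at the top level of the JSON file, a ~/.opencode/openai-codex-auth-config.json might look like this (values are illustrative, matching the defaults above):

```json
{
  "codexMode": true,
  "fastSession": false,
  "retryAllAccountsRateLimited": true,
  "retryAllAccountsMaxWaitMs": 0,
  "fetchTimeoutMs": 60000,
  "streamStallTimeoutMs": 45000
}
```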
### Environment variables

```bash
DEBUG_CODEX_PLUGIN=1 opencode # Enable debug logging
ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode # Log all API requests
CODEX_PLUGIN_LOG_LEVEL=debug opencode # Set log level (debug|info|warn|error)
CODEX_MODE=0 opencode # Temporarily disable bridge prompt
CODEX_TUI_V2=0 opencode # Disable Codex-style UI (legacy output)
CODEX_TUI_COLOR_PROFILE=ansi16 opencode # Force UI color profile
CODEX_TUI_GLYPHS=unicode opencode # Override glyph mode (ascii|unicode|auto)
CODEX_AUTH_PREWARM=0 opencode # Disable startup prewarm (prompt/instruction cache warmup)
CODEX_AUTH_FAST_SESSION=1 opencode # Enable faster response defaults
CODEX_AUTH_FAST_SESSION_STRATEGY=always opencode # Force fast mode for all prompts
CODEX_AUTH_FAST_SESSION_MAX_INPUT_ITEMS=24 opencode # Tune fast-mode history window
CODEX_AUTH_FALLBACK_GPT53_TO_GPT52=0 opencode # Disable gpt-5.3 -> gpt-5.2 fallback (strict mode)
CODEX_AUTH_FETCH_TIMEOUT_MS=120000 opencode # Override request timeout
CODEX_AUTH_STREAM_STALL_TIMEOUT_MS=60000 opencode # Override SSE stall timeout
```

For all options, see docs/configuration.md.
---
## Documentation

- Getting Started — Complete installation guide
- Configuration — All configuration options
- Troubleshooting — Common issues and fixes
- Architecture — How the plugin works
---
## Credits

- numman-ali/opencode-openai-codex-auth by numman-ali — Original plugin
- ndycode — Multi-account support and maintenance
## License

MIT License. See LICENSE for details.
### Legal
- Personal / internal development only
- Respect subscription quotas and data handling policies
- Not for production services or bypassing intended limits
By using this plugin, you acknowledge:
- Terms of Service risk — This approach may violate ToS of AI model providers
- No guarantees — APIs may change without notice
- Assumption of risk — You assume all legal, financial, and technical risks
- Not affiliated with OpenAI. This is an independent open-source project.
- "ChatGPT", "GPT-5", "Codex", and "OpenAI" are trademarks of OpenAI, L.L.C.