# codex-batch-cli

A CLI tool for batch processing questions using OpenAI Codex API.

```bash
npm install @kakugo-ch/codex-batch-cli
```

A robust CLI tool for batch processing questions using OpenAI's CLI tools. Process multiple questions at once, handle failures gracefully, and generate comprehensive reports.
## Features

- **Batch Processing**: Handle multiple questions in a single run
- **Smart Retry**: Automatically retry failed queries with the `--retry` flag
- **Continuous Processing**: Automatically continues from where it left off if interrupted
- **Parallel Processing**: Process multiple questions concurrently with configurable concurrency
- **Detailed Reports**: Generate structured JSON reports with answers and errors
- **Template Support**: Define reusable templates with variables for common question patterns
- **Error Handling**: Robust error handling with configurable timeouts and retries
- **Automatic Security**: Sensitive data such as API keys and tokens is automatically masked in logs and error messages
- **Easy to Use**: Simple CLI interface with clear commands
- **Flexible CLI Support**: Works with different OpenAI CLI tools (codex, open-codex, etc.)
## Installation

1. Install your preferred OpenAI CLI tool:

   ```bash
   # For example, install OpenAI's Codex CLI:
   npm install -g @openai/codex

   # Or install open-codex:
   npm install -g @openai/open-codex
   ```
2. Install this CLI:

   ```bash
   npm install -g @kakugo-ch/codex-batch-cli
   ```
3. Set your OpenAI API key:

   ```bash
   export OPENAI_API_KEY="your_api_key_here"
   ```
4. Create a `questions.json` file. You can use direct questions or templates (a quick validity check is shown after these steps):

   ```json
   {
     "templates": [
       {
         "id": "code-review",
         "content": "Please review the following {{ language }} code:\n\n```{{ language }}\n{{ code }}\n```\n\nFocus on: {{ focus }}"
       }
     ],
     "questions": [
       {
         "id": "q1",
         "template": "code-review",
         "variables": {
           "language": "javascript",
           "code": "function add(a,b) { return a+b }",
           "focus": "code style and type safety"
         }
       },
       {
         "id": "q2",
         "text": "What is the language of this repository?"
       }
     ]
   }
   ```
5. Run the CLI:

   ```bash
   codex-batch -i questions.json -o report.json
   ```
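As mentioned in step 4, before kicking off a long batch it can be worth confirming that `questions.json` is valid JSON. One quick way, assuming `jq` is installed:

```bash
# jq exits non-zero and prints the parse error if the file is not valid JSON
jq empty questions.json && echo "questions.json parses OK"
```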
## Requirements

- Node.js >= 14.0.0
- OpenAI API key (set as the `OPENAI_API_KEY` environment variable)
- OpenAI CLI tool (`npm install -g @openai/codex` or `npm install -g @openai/open-codex`)
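A quick sanity check that the underlying CLI is actually on your PATH (plain shell, not part of this tool):

```bash
# Install the codex CLI if it is not already available
command -v codex >/dev/null 2>&1 || npm install -g @openai/codex
```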
## Usage

```bash
codex-batch -i questions.json -o report.json
```

### Retrying Failed Questions

```bash
codex-batch --retry report.json
```

### Options
- `-i, --input` - Input questions file
- `-o, --output` - Output report file (default: `report.json`)
- `-r, --repo` - Path to the repository to analyze (default: current directory)
- `--retry` - Retry failed questions from a report
- `--max-retries` - Maximum retries for failed queries (default: 3)
- `--timeout` - Timeout in milliseconds (default: 600000)
- `--concurrency` - Number of questions to process in parallel (default: 1)
- `--cli-args` - Additional arguments to pass to the CLI (e.g. `"--temperature 0.7"`)
- `--executable-name` - Name of the CLI executable to use (default: `"codex"`)
- `-h, --help` - Show help
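For example, a larger batch might combine several of these flags (the values here are illustrative):

```bash
# Process questions 5 at a time, allow up to 2 retries per question,
# and give each query a 5-minute timeout
codex-batch -i questions.json -o report.json --concurrency 5 --max-retries 2 --timeout 300000
```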
## Input Format

The input file can contain both templates and direct questions:
### Templates

Templates use Liquid syntax and can be defined in the `templates` array:

```json
{
"templates": [
{
"id": "template-id",
"content": "Template content with {{ variable }} placeholders"
}
]
}
```

### Questions

Questions can either use a template or provide direct text:

```json
{
"questions": [
{
"id": "question-1",
"template": "template-id",
"variables": {
"variable": "value"
}
},
{
"id": "question-2",
"text": "Direct question without template"
}
]
}
```

## Examples

### Basic Usage

```bash
# Using the default codex CLI sequentially
codex-batch -i questions.json -o report.json

# Using the codex CLI with parallel processing (10 questions at a time)
codex-batch -i questions.json -o report.json --concurrency 10

# High concurrency for large batches (50 questions at a time)
codex-batch -i questions.json -o report.json --concurrency 50

# Using the open-codex CLI
codex-batch -i questions.json -o report.json --executable-name open-codex
```

If the command is interrupted (e.g., by Ctrl+C), running it again with the same output file will:
1. Skip questions that were already successfully processed
2. Retry questions that failed
3. Process remaining questions that weren't attempted yet
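In practice, resuming is just a matter of re-running the original command; a sketch:

```bash
# First run, interrupted partway through with Ctrl+C
codex-batch -i questions.json -o report.json

# Second run with the same output file: already-answered questions are
# skipped, failed ones are retried, and the rest are processed
codex-batch -i questions.json -o report.json
```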
### Analyzing a Specific Repository

```bash
codex-batch -i questions.json -r /path/to/repo
```

### Custom CLI Arguments
```bash
# Using codex with custom parameters
codex-batch -i questions.json --cli-args "--temperature 0.7 --max-tokens 100"

# Using open-codex with custom parameters
codex-batch -i questions.json --executable-name open-codex --cli-args "--temperature 0.7 --max-tokens 100"
```

### Retrying with Custom Arguments
```bash
codex-batch --retry report.json --cli-args "--temperature 0.9"
```

## Report Format

```json
{
"timestamp": "2025-04-27T19:00:00.000Z",
"totalQuestions": 2,
"successfulQueries": 1,
"failedQueries": 1,
"answers": [
{
"questionId": "q1",
"question": "What is the language of this repository?",
"answer": "This repository is written in TypeScript.",
"startTime": "2025-04-27T18:59:58.123Z",
"endTime": "2025-04-27T19:00:00.000Z",
"durationMs": 1877
},
{
"questionId": "q2",
"question": "Is this repository using SQL databases?",
"error": "Timeout error",
"startTime": "2025-04-27T19:00:00.001Z",
"endTime": "2025-04-27T19:00:30.001Z",
"durationMs": 30000
}
],
"startTime": "2025-04-27T18:59:58.123Z",
"endTime": "2025-04-27T19:00:30.001Z",
"durationMs": 31878,
"fastestAnswerMs": 1877,
"slowestAnswerMs": 30000,
"averageAnswerMs": 15938
}
```
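Because each failed answer carries an `error` field, the IDs of failed questions can be pulled from a report with a one-liner (assuming `jq` is installed):

```bash
# List the IDs of all failed questions in a report
jq -r '.answers[] | select(.error != null) | .questionId' report.json
```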
## Development
After making changes:
```bash
npm run build && npm test && npm link
```

## Release

```bash