# RLM PRO

Enterprise-grade Recursive Language Models for near-infinite-context code analysis. Analyze entire codebases with AI.
```bash
npm install @superadnim/rlm-pro
```

Analyze any corpus of unstructured data using Recursive Language Models, which enable LLMs to handle near-infinite context through recursive decomposition. Works with codebases, document collections, research papers, logs, and any other text-based corpus.

Based on the RLM research from the MIT OASYS lab.
## Quick Start

```bash
# Using npx (recommended - auto-installs the Python package from GitHub)
npx @superadnim/rlm-pro ./my-project -q "Explain the architecture"
```

## Requirements
- Node.js 18+
- uv (Python package manager) - will be prompted to install if missing
- OpenAI API key (set as environment variable)
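Before running the CLI, you can sanity-check these prerequisites from Node. The snippet below is a hypothetical helper (not part of rlm-pro), using only the standard runtime; checking for `uv` is left to the install step below.

```javascript
// Hypothetical prerequisite check -- not part of rlm-pro.
// Verifies the Node.js version and that an OpenAI API key is present.
function checkPrereqs(env = process.env) {
  const nodeMajor = Number(process.versions.node.split('.')[0]);
  return {
    nodeOk: nodeMajor >= 18,            // Node.js 18+
    keyOk: Boolean(env.OPENAI_API_KEY), // OpenAI API key set
  };
}

console.log(checkPrereqs());
// e.g. { nodeOk: true, keyOk: false } if the key is not yet exported
```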
```bash
# Install uv if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install the Python package (auto-installed on first run, or install manually)
uv pip install git+https://github.com/CG-Labs/RLM-PRO.git

# Set your API key
export OPENAI_API_KEY="your-key"
```

## Usage
### CLI

```bash
# Basic usage
npx @superadnim/rlm-pro ./my-project -q "Explain the architecture"

# Get JSON output (for programmatic use)
npx @superadnim/rlm-pro ./my-project -q "List all API endpoints" --json

# Use a specific model
npx @superadnim/rlm-pro ./my-project -q "Find potential bugs" -m gpt-5.2

# Use the Anthropic backend
npx @superadnim/rlm-pro ./my-project -q "Review this code" -b anthropic

# Verbose output for debugging
npx @superadnim/rlm-pro ./my-project -q "How does authentication work?" -v

# Only build the context (no LLM call)
npx @superadnim/rlm-pro ./my-project -q "" --context-only
```

### Node.js API

```javascript
const { analyzeCodebase } = require('@superadnim/rlm-pro');

async function main() {
  const result = await analyzeCodebase('./my-project', {
    query: 'Summarize the codebase structure',
    backend: 'openai',
    model: 'gpt-5.2',
  });

  console.log(result.response);
  console.log('Execution time:', result.execution_time, 'seconds');
}

main();
```

## Options
| Option | Description | Default |
|--------|-------------|---------|
| -q, --query | Question or task to perform (required) | - |
| -b, --backend | LLM backend (openai, anthropic, etc.) | openai |
| -m, --model | Model name | gpt-5.2 |
| -e, --env | Execution environment (local, docker) | local |
| --max-depth | Maximum recursion depth | 1 |
| --max-iterations | Maximum iterations | 30 |
| --max-file-size | Max size per file | 100000 |
| --max-total-size | Max total context size | 500000 |
| --no-tree | Exclude directory tree from context | - |
| --json | Output as JSON | - |
| -v, --verbose | Enable verbose output | - |
| --context-only | Only output the built context | - |

## Environment Variables
| Variable | Description | Required |
|----------|-------------|----------|
| OPENAI_API_KEY | OpenAI API key | Yes (for the OpenAI backend) |
| ANTHROPIC_API_KEY | Anthropic API key | For the Anthropic backend |

## How It Works
Recursive Language Models (RLMs) enable LLMs to handle near-infinite context by:
1. Context Building: Intelligently reads and formats your codebase
2. Recursive Decomposition: Breaks complex queries into manageable sub-tasks
3. Code Execution: Runs Python code in a sandboxed environment to explore and analyze
4. Iterative Refinement: Continues until a complete answer is found
This allows answering complex questions about large codebases that would exceed normal context limits.
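As a rough illustration of the steps above, the loop can be sketched as follows. This is a simplified sketch, not rlm-pro's actual implementation; the `model` callback and its `answer`/`decompose` action shapes are hypothetical stand-ins for a real LLM backend.

```javascript
// Simplified sketch of the recursive loop described above -- NOT the actual
// rlm-pro implementation. `model` is a hypothetical stand-in that returns
// either a final answer or a set of sub-queries to recurse on.
async function rlmAnswer(model, query, context, opts = {}, depth = 0) {
  const { maxDepth = 1, maxIterations = 30 } = opts;
  for (let i = 0; i < maxIterations; i++) {
    const step = await model(query, context);            // ask the model for its next action
    if (step.type === 'answer') return step.text;        // complete answer found -> stop
    if (step.type === 'decompose' && depth < maxDepth) { // break the query into sub-tasks
      const parts = await Promise.all(
        step.subQueries.map((sq) => rlmAnswer(model, sq, context, opts, depth + 1))
      );
      context += '\n' + parts.join('\n');                // fold sub-answers back into context
    }
  }
  throw new Error('maxIterations exceeded without a complete answer');
}

// Toy model: decomposes the top-level query once, answers sub-queries directly.
const toyModel = async (query, context) => {
  if (query.startsWith('sub:')) return { type: 'answer', text: query.slice(4).toUpperCase() };
  if (context.includes('A\nB')) return { type: 'answer', text: 'combined: ' + context.trim() };
  return { type: 'decompose', subQueries: ['sub:a', 'sub:b'] };
};

rlmAnswer(toyModel, 'overall question', '').then(console.log); // prints "combined: A\nB"
```

The recursion bottoms out at `--max-depth`, and the outer loop gives up after `--max-iterations`, mirroring the CLI options above.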
## Examples

### Architecture analysis

```bash
npx @superadnim/rlm-pro ./backend -q "Describe the system architecture and key design patterns"
```

### Security review

```bash
npx @superadnim/rlm-pro ./src -q "Find potential security vulnerabilities" --json
```

### API documentation

```bash
npx @superadnim/rlm-pro ./api -q "Generate API documentation for all endpoints"
```

### Code review

```bash
npx @superadnim/rlm-pro ./feature-branch -q "Review this code for best practices"
```

## License

MIT