# CCLM: Claude Code with LM Studio
> Use the official Claude Code CLI with local LM Studio models from anywhere in your terminal.

Install: `npm install -g cclm`

## Features
- ✅ **Official Claude Code Interface** - Full compatibility with the Claude Code CLI
- ✅ **Local LM Studio Models** - Use any model loaded in LM Studio
- ✅ **Global Command** - Run `cclm` from any directory
- ✅ **Auto Proxy Management** - Automatic start/stop of the translation proxy
- ✅ **Zero Configuration** - Works out of the box with sensible defaults
- ✅ **Configurable** - Easy configuration via CLI commands
## Prerequisites

1. **LM Studio** - Download from https://lmstudio.ai/
2. **Node.js 18+** - Download from https://nodejs.org/
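To confirm the Node.js requirement is met:

```bash
# Should print v18.0.0 or newer
node --version
```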
## Installation

### Global Installation (Recommended)

```bash
npm install -g cclm
```

### From Source

```bash
git clone https://github.com/yourusername/cclm
cd cclm
npm install
npm link
```

## Quick Start

1. **Start LM Studio**
   - Open LM Studio
   - Load a model (e.g., Granite 4.0 H 1B, Llama 3.1 8B)
   - Go to the "Local Server" tab
   - Click "Start Server"
2. **Run CCLM**

   ```bash
   cclm
   ```
That's it! You're now using Claude Code with your local LM Studio model.
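If something doesn't start, a quick way to confirm that LM Studio's local server is reachable (1234 is LM Studio's default port, and the default assumed in CCLM's configuration) is:

```bash
# Should return a JSON list of the models LM Studio has loaded
curl http://localhost:1234/v1/models
```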
## Usage

### Basic Usage

```bash
# Start an interactive session
cclm
```
### Configuration
```bash
# Show current configuration
cclm config show

# Set LM Studio model
cclm config set lmStudioModel granite-4.0-h-1b

# Set LM Studio URL (if using a custom port)
cclm config set lmStudioUrl http://localhost:1234/v1

# Enable debug mode
cclm config set debug true

# Change proxy port
cclm config set proxyPort 3001

# Disable auto-start proxy
cclm config set autoStartProxy false
```

### Configuration Options
| Key | Default | Description |
|-----|---------|-------------|
| lmStudioUrl | http://localhost:1234/v1 | LM Studio API URL |
| lmStudioModel | local-model | Model name in LM Studio |
| proxyPort | 3000 | Port for proxy server |
| debug | false | Enable debug logging |
| autoStartProxy | true | Auto-start proxy if not running |

## How It Works
```
┌─────────────┐             ┌────────────┐             ┌───────────┐
│  cclm CLI   │ ──────────> │   Proxy    │ ──────────> │ LM Studio │
│ (Official)  │  Anthropic  │   Server   │   OpenAI    │  Server   │
└─────────────┘  API format └────────────┘  API format └───────────┘
                             (Translator)
```
CCLM consists of three parts:

1. **CCLM Wrapper** (the `cclm` command) - Manages setup and launches components
2. **Proxy Server** - Translates between the Anthropic API and the LM Studio (OpenAI) API
3. **Claude Code CLI** - Official Anthropic CLI (installed as a dependency)
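To make the translation concrete, here is a rough sketch of the two request shapes involved. The `/v1/messages` route on the proxy and its port are assumptions (3000 is the default `proxyPort`); the payloads follow the public Anthropic Messages and OpenAI Chat Completions formats.

```bash
# What Claude Code sends to the proxy (Anthropic Messages format);
# the /v1/messages route on the proxy is an assumption
curl -s http://localhost:3000/v1/messages \
  -H "content-type: application/json" \
  -d '{"model": "local-model", "max_tokens": 256, "messages": [{"role": "user", "content": "Hello"}]}'

# What the proxy forwards to LM Studio (OpenAI Chat Completions format)
curl -s http://localhost:1234/v1/chat/completions \
  -H "content-type: application/json" \
  -d '{"model": "local-model", "messages": [{"role": "user", "content": "Hello"}]}'
```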
## Features in Detail

### Full Claude Code Compatibility
All official Claude Code features work:
- ✅ File operations (Read, Write, Edit)
- ✅ Search (Glob, Grep)
- ✅ Terminal commands (Bash)
- ✅ Task management
- ✅ Agents and plugins
- ✅ Streaming responses
- ✅ Function/tool calling
- ✅ MCP servers
- ✅ Slash commands
### Automatic Proxy Management
The proxy server starts automatically when needed:
```bash
# First run - proxy starts automatically
$ cclm
[1/3] Checking LM Studio... ✓
[2/3] Checking proxy server... starting...
✓ Proxy server started
[3/3] Starting Claude Code... ✓
```

On subsequent runs, if the proxy is still running:
```bash
$ cclm
[1/3] Checking LM Studio... ✓
[2/3] Checking proxy server... ✓
[3/3] Starting Claude Code... ✓
```
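If you want to check by hand whether the proxy is already listening, the same port checks used in Troubleshooting apply (3000 is the default `proxyPort`):

```bash
# Mac/Linux: prints a PID if something is already listening on port 3000
lsof -ti:3000

# Windows equivalent:
# netstat -ano | findstr :3000
```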
### Persistent Configuration

Your configuration is saved to `config.json` in the package directory and persists across updates.
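As an illustration only, a `config.json` built from the keys in the Configuration Options table (with the Granite model from Quick Start) might look like this; the exact file layout is an assumption:

```bash
cat config.json
# {
#   "lmStudioUrl": "http://localhost:1234/v1",
#   "lmStudioModel": "granite-4.0-h-1b",
#   "proxyPort": 3000,
#   "debug": false,
#   "autoStartProxy": true
# }
```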
## Examples

### Use a Specific Model
```bash
# Configure once
cclm config set lmStudioModel llama-3.1-70b

# Then just run
cclm
```

### Debug Mode
```bash
# Enable debug logging
cclm config set debug true

# Run with debug output
cclm
```

### Custom Ports
```bash
# If LM Studio uses a different port
cclm config set lmStudioUrl http://localhost:8080/v1

# Or if you want the proxy on a different port
cclm config set proxyPort 8000
```

### Work from Any Directory
CCLM works from any directory:
```bash
# Navigate to your project
cd ~/projects/my-app

# Run cclm
cclm

# Claude Code operates in the current directory
```

## Troubleshooting
### "LM Studio is not running or not accessible"
**Error:** `LM Studio is not running or not accessible`

**Solution:**
1. Verify LM Studio is running
2. Check that the local server is enabled
3. Test manually: `curl http://localhost:1234/v1/models`
4. If using a different port: `cclm config set lmStudioUrl http://localhost:YOUR_PORT/v1`

### Proxy Server Won't Start
**Error:** `Failed to start proxy server`

**Solution:**
1. Check if port 3000 is available: `netstat -ano | findstr :3000` (Windows) or `lsof -ti:3000` (Mac/Linux)
2. Use a different port: `cclm config set proxyPort 3001`
3. Enable debug: `cclm config set debug true` and check the logs

### Command Not Found
**Error:** `cclm: command not found`

**Solution:**

```bash
# Verify installation
npm list -g cclm

# Reinstall globally
npm install -g cclm

# Or link locally
cd cclm
npm link
```
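If the package shows as installed but the command still isn't found, npm's global bin directory may not be on your `PATH`; you can locate it with:

```bash
# Prints npm's global prefix; global commands live in <prefix>/bin
# (or in the prefix directory itself on Windows)
npm prefix -g
```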
### Model Not Responding

**Solution:**
1. Verify the model name matches LM Studio (see the check below): `cclm config set lmStudioModel your-exact-model-name`
2. Check the LM Studio logs for errors
3. Try a smaller model or a longer timeout
4. Enable debug mode to see requests: `cclm config set debug true`
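To find the exact model name LM Studio expects, you can list the model ids it is serving and copy the matching one into the configuration:

```bash
# List model ids from LM Studio (default port 1234)
curl http://localhost:1234/v1/models

# Then set the matching id
cclm config set lmStudioModel your-exact-model-name
```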
## Advanced Usage

### Other OpenAI-Compatible Backends
CCLM works with any OpenAI-compatible API:
```bash
# Ollama
cclm config set lmStudioUrl http://localhost:11434/v1

# LocalAI
cclm config set lmStudioUrl http://localhost:8080/v1

# Text Generation WebUI
cclm config set lmStudioUrl http://localhost:5000/v1
```

### Programmatic Usage
```javascript
import { spawn } from 'child_process';

const cclm = spawn('cclm', ['--model', 'my-model'], {
  env: process.env,
  stdio: 'inherit'
});
```

### Multiple Configurations
```bash
# Save current config
cp ~/.cclm/config.json ~/.cclm/config-granite.json

# Switch configs
cp ~/.cclm/config-llama.json ~/.cclm/config.json
cclm
```

## Comparison with Alternatives
| Feature | CCLM | Custom CLI | API Direct |
|---------|------|------------|------------|
| Official CLI | ✅ | ❌ | ❌ |
| Full Features | ✅ | ⚠️ Limited | ❌ |
| Auto Updates | ✅ | ❌ | ❌ |
| Easy Setup | ✅ | ⚠️ Medium | ❌ |
| LM Studio | ✅ | ✅ | ✅ |
| Global Command | ✅ | ⚠️ Manual | N/A |
## Performance
Typical performance with Granite 4.0 H 1B (1M context):
- Startup: ~3-5 seconds
- First response: ~2-5 seconds
- Streaming: Real-time, word-by-word
- Tool execution: <100ms proxy overhead
- Memory: ~50MB (proxy) + ~2GB (LM Studio model)
## Security & Privacy
- ✅ Runs 100% locally - no data leaves your machine
- ✅ No telemetry or tracking
- ✅ Proxy only accessible on localhost
- ✅ Dummy API key (not used externally)
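For context on the dummy key: redirecting an Anthropic client to a local endpoint generally comes down to overriding its base URL and API key. How CCLM wires this internally is not shown here, but a manual equivalent, assuming Claude Code honors the standard `ANTHROPIC_BASE_URL` and `ANTHROPIC_API_KEY` environment variables, would look roughly like:

```bash
# Assumption: point the official CLI at the local proxy with a dummy key
export ANTHROPIC_BASE_URL=http://localhost:3000
export ANTHROPIC_API_KEY=dummy-key
claude
```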
## Development
### Setup
```bash
git clone
cd cclm-package
npm install
npm link
```

### Running Tests
```bash
npm test
```

### Debugging
```bash
# Enable debug mode
cclm config set debug true

# Or set an environment variable
DEBUG=true cclm
```

## Changelog
### Initial Release
- Official Claude Code CLI integration
- LM Studio proxy server
- Auto proxy management
- Configuration system
- Global command installation
## License
MIT
## Credits
- Claude Code - Official CLI by Anthropic
- LM Studio - Local LLM hosting platform
- CCLM - Integration layer and proxy server
## Support
For issues, questions, or contributions:
1. Check this README
2. Enable debug mode: `cclm config set debug true`

---
Enjoy using Claude Code with LM Studio! 🚀