# Knightcode CLI

Your local AI coding assistant using Ollama, LM Studio, and more.

```bash
npm install @neuroequalityorg/knightcode
```

A powerful AI coding assistant CLI tool that helps you write, understand, and debug code using local AI models.

## Features
- 🤖 Local AI-powered code assistance - No cloud API keys required
- 🏠 Multiple local providers - Ollama and LM Studio support
- 📝 Code generation and refactoring - Generate code from natural language
- 🔍 Code explanation and documentation - Understand complex codebases
- 🐛 Bug fixing and debugging - AI-powered problem solving
- 💡 Intelligent code suggestions - Context-aware recommendations
- 🔄 Real-time code analysis - Instant feedback on your code
- 🔒 Privacy-focused - Your code stays on your machine
## Installation

```bash
npm install -g @neuroequalityorg/knightcode
```

### Requirements

- Node.js >= 18.0.0
- Either Ollama or LM Studio installed and running locally
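You can sanity-check both requirements before going further. This is a minimal sketch assuming the default ports listed under AI Providers below; only one of the two services needs to respond:

```bash
# Confirm Node.js meets the minimum version
node --version   # should print v18.0.0 or newer

# Check whether a local AI service is reachable on its default port
curl -s http://localhost:11434/api/tags   # Ollama: lists installed models
curl -s http://localhost:1234/v1/models   # LM Studio: lists loaded models
```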
## Quick Start

New to Knightcode? Start with our Getting Started Guide for a 5-minute setup!
### Ollama (recommended)

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
```
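With Ollama installed, pull a model before the first run; `devstral:24b` is one of the recommended models listed under AI Providers below:

```bash
# Download a recommended model (swap in codellama:7b or llama3.2:3b for lighter hardware)
ollama pull devstral:24b
```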
### LM Studio
```bash
# Download LM Studio from https://lmstudio.ai/
# Load a model and start the local server
```

### Test Knightcode

```bash
knightcode ask "Hello, can you help me with coding?"
```

## Usage
```bash
# Start the interactive CLI
knightcode

# Ask a coding question
knightcode ask "How do I implement a binary search tree in TypeScript?"

# Explain code
knightcode explain path/to/file.ts

# Refactor code
knightcode refactor path/to/file.ts --focus readability

# Fix bugs
knightcode fix path/to/file.ts --issue "Infinite loop in the sort function"

# Generate code
knightcode generate "a REST API server with Express" --language TypeScript

# Use a specific AI provider and model
knightcode --provider ollama --model devstral:24b ask "How do I implement authentication?"
```

## Commands
### AI Commands
- ask - Ask questions about code or programming
- explain - Get explanations of code files or snippets
- refactor - Refactor code for better readability or performance
- fix - Fix bugs or issues in code
- generate - Generate code based on a prompt

### Utility Commands
- config - View or edit configuration settings
- login - Log in to Knightcode (for cloud features)
- logout - Log out and clear stored credentials

## AI Providers
Knightcode supports multiple local AI providers:
### Ollama
- Default provider - Easy to set up and use
- Recommended models: devstral:24b, codellama:7b, llama3.2:3b
- Port: 11434 (default)
- Best for: Most users, good balance of speed and quality

### LM Studio
- Alternative provider - More control over models
- Port: 1234 (default)
- Best for: Users who want to experiment with different models
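You can switch providers per invocation with the `--provider` flag shown under Usage. A sketch, assuming `lmstudio` is the LM Studio provider id (only `ollama` is confirmed above):

```bash
# Ollama on its default port (11434)
knightcode --provider ollama --model devstral:24b ask "How do I paginate an API?"

# LM Studio on its default port (1234); provider id "lmstudio" is an assumption
knightcode --provider lmstudio ask "How do I paginate an API?"
```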
### Cloud Fallback

- Fallback option - Requires API key
- Best for: When local models aren't sufficient

## Configuration
Knightcode can be configured through:
1. Configuration file (.knightcode.json) - Recommended
2. Environment variables - For automation
3. Command line arguments - For one-time use

### Configuration File
Create .knightcode.json in your project directory:

```json
{
  "ai": {
    "provider": "ollama",
    "model": "devstral:24b",
    "temperature": 0.7,
    "maxTokens": 4096
  },
  "terminal": {
    "theme": "system",
    "useColors": true
  }
}
```
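The same file shape should cover LM Studio by swapping the provider and model; a sketch, where the provider id and the placeholder model name are assumptions:

```json
{
  "ai": {
    "provider": "lmstudio",
    "model": "your-loaded-model-name",
    "temperature": 0.7,
    "maxTokens": 4096
  }
}
```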
### Environment Variables

```bash
export KNIGHTCODE_AI_PROVIDER=ollama
export KNIGHTCODE_AI_MODEL=devstral:24b
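
# Environment variables can also be set per invocation using standard shell
# syntax; llama3.2:3b is one of the recommended Ollama models listed above.
KNIGHTCODE_AI_MODEL=llama3.2:3b knightcode ask "Explain closures in JavaScript"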
```

## Performance Tips
- Smaller models (3B-7B): Faster responses, good for simple tasks
- Larger models (13B-70B): Better quality, slower responses
- Memory: Ensure you have enough RAM for your chosen model
- GPU: Models run faster with GPU acceleration (if supported)
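To see what a given model actually costs on your machine, Ollama's own CLI can report sizes and placement (assuming a reasonably recent Ollama release):

```bash
# List locally available models and their on-disk size
ollama list

# Show loaded models, memory use, and CPU/GPU placement
ollama ps
```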
## Troubleshooting
### Common Issues
1. Connection failed: Make sure your AI service is running
2. Model not found: Download/pull the model first
3. Slow responses: Try a smaller model or check your hardware
4. Memory errors: Reduce model size or increase available RAM
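For the first two issues, these Ollama commands usually resolve things; a sketch assuming the Ollama provider and a model from the recommended list:

```bash
# Connection failed: start the Ollama server if it isn't running
ollama serve

# Model not found: pull the model before asking Knightcode to use it
ollama pull devstral:24b
```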
### Debugging
```bash
# Check configuration
knightcode config

# Test the connection
knightcode ask "Hello"

# View verbose logs
knightcode --verbose ask "Hello"
```

## Development
```bash
# Clone the repository
git clone https://github.com/neuroequalityorg/knightcode.git
cd knightcode

# Install dependencies
npm install

# Build the project
npm run build

# Run in development mode
npm run dev

# Run tests
npm test
```

## Contributing

Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Documentation & Support

- 🚀 Getting Started: GETTING_STARTED.md - 5-minute setup guide
- 📖 Detailed Setup: SETUP_LOCAL_AI.md - Comprehensive configuration guide
- 🐛 Issues: Report bugs on GitHub
- 💬 Discussions: Join community discussions
- ⭐ Star: If this project helps you, consider giving it a star!