# NeoHub CLI

> AI-powered code assistant in your terminal using local Ollama models

NeoHub is a privacy-first AI coding assistant that runs 100% locally. No cloud, no API keys, no data sent anywhere.

## Features
- **100% Private** - All processing happens locally on your machine
- **Lightning Fast** - No API latency, instant responses
- **Smart Model Selection** - An AI-powered Model Supervisor recommends the best model for each task
- **Powerful Models** - DeepSeek Coder 33B, CodeLlama 34B, and more
- **Interactive Chat** - Conversational AI assistance
- **Code Editing** - AI-powered file modifications
- **Code Analysis** - Review, explain, security, and performance analysis
## Prerequisites

- Node.js 18+
- Ollama - install from [ollama.ai](https://ollama.ai)
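
You can verify both prerequisites from a shell before installing (the Ollama server address below is its default, matching the configuration shown later):

```bash
# Check the Node.js version (needs v18 or later)
node --version

# Check that the Ollama CLI is installed
ollama --version

# Check that the Ollama server is running (lists local models as JSON)
curl -s http://localhost:11434/api/tags
```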
## Installation

```bash
# Install globally
npm install -g @fsfalmansour/neohub-cli
```
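
To confirm the install succeeded (the `--version` flag assumes the CLI exposes Commander.js's standard version option, which is an assumption here):

```bash
# Verify the package is installed globally
npm ls -g @fsfalmansour/neohub-cli

# Print the installed CLI version (assumed Commander.js convention)
neohub --version
```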
## Quick Start

```bash
# Initialize configuration
neohub init

# Start chatting with AI
neohub chat
```

## Commands
### Chat

Start an interactive chat session with AI:

```bash
neohub chat
```

Example:

```
You: Explain async/await in JavaScript
AI: Async/await is syntactic sugar for promises...
```
### Edit

Edit files with AI assistance:

```bash
neohub edit -f app.js -i "add error handling"
```

Examples:

```bash
# Add error handling to a function
neohub edit -f server.js -i "add try-catch to all async functions"

# Refactor code
neohub edit -f utils.js -i "convert to TypeScript"

# Create a backup first
neohub edit -f config.js -i "add validation" --backup
```
### Analyze

Analyze code for issues, explanations, or improvements:

```bash
neohub analyze [--type review|explain|security|performance]
```

Examples:

```bash
# Code review
neohub analyze src/app.js --type review

# Security analysis
neohub analyze . --type security

# Performance analysis
neohub analyze lib/ --type performance

# Explain code
neohub analyze components/Header.tsx --type explain
```
### Models

List available Ollama models:

```bash
neohub models
```

Output:

```
Available Models

  deepseek-coder:33b  (17.53 GB)
  codellama:34b       (17.74 GB)
  qwen2.5-coder:1.5b  (0.92 GB)
```
### Recommend

Get intelligent model recommendations:

```bash
neohub recommend
```

The Model Supervisor analyzes:

- Task type (code generation, review, debugging, etc.)
- Task complexity
- Available models
- Performance history

It then recommends the best model for your specific task.
### Config

Show the current configuration:

```bash
neohub config
```
### Completion

Generate a shell completion script for tab completion:

```bash
# Auto-detect your shell
neohub completion

# Specify the shell type
neohub completion --shell bash
neohub completion --shell zsh
neohub completion --shell fish
```

Enable autocomplete:

```bash
# Bash - add to ~/.bashrc or ~/.bash_profile
eval "$(neohub completion --shell bash)"

# Zsh - add to ~/.zshrc
eval "$(neohub completion --shell zsh)"

# Fish - add to ~/.config/fish/config.fish
neohub completion --shell fish | source
```
After enabling it, you can:

- Press TAB to complete commands: `neohub ch` → `neohub chat`
- Press TAB to complete options: `neohub analyze --type ` → shows `review explain security performance`
- Press TAB to complete file paths: `neohub edit -f ` → shows available files

### Analytics
View usage statistics and analytics:

```bash
# View the analytics dashboard
neohub analytics

# Export analytics data
neohub analytics --export

# Clear analytics data
neohub analytics --clear

# Disable/enable tracking
neohub analytics --disable
neohub analytics --enable
```
Shows:

- Total commands executed
- Success rate
- Average response time
- Most used commands
- Model performance metrics

Privacy: all analytics are stored locally and never sent to the cloud.

### Search
Search for code patterns across your project:

```bash
# Basic search
neohub search "function"

# Case-sensitive search
neohub search "MyClass" --case-sensitive

# Regex search
neohub search "class\s+\w+" --regex

# Search with context
neohub search "TODO" --context-lines 5

# Limit results
neohub search "import" --max-results 20
```
Options:

- `-i, --case-sensitive` - Case-sensitive search
- `-w, --whole-word` - Match whole words only
- `-r, --regex` - Use a regex pattern
- `-p, --path` - Directory to search in
- `-m, --max-results` - Maximum results (default: 100)
- `-c, --context-lines` - Context lines (default: 2)
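
These flags compose; for example, a regex search scoped to one directory with extra context (built only from the documented options above):

```bash
# Find TODO/FIXME markers in src/, 3 lines of context, capped at 20 hits
neohub search "TODO|FIXME" --regex --path src --context-lines 3 --max-results 20
```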
Default configuration (`~/.config/configstore/neohub.json`):

```json
{
  "ollama": {
    "baseUrl": "http://localhost:11434",
    "model": "deepseek-coder:33b",
    "timeout": 60000
  },
  "preferences": {
    "autoContext": true,
    "maxContextFiles": 10
  }
}
```
## Model Supervisor

NeoHub includes an intelligent Model Supervisor that automatically recommends the best model for each task.

Task-based recommendations:

- Code Generation → DeepSeek Coder 33B (better at generating new code)
- Code Review → CodeLlama 34B (trained on review patterns)
- Refactoring → DeepSeek Coder 33B (understands structure)
- Debugging → CodeLlama 34B (better at finding issues)
- Code Explanation → CodeLlama 34B (natural-language strength)
- Architecture → DeepSeek Coder 33B (system design)
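
As a rough sketch (not NeoHub's actual code), the core of this mapping could look like the shell function below; the real supervisor also weighs task complexity, available models, and performance history:

```bash
# Hypothetical task-to-model lookup mirroring the table above
recommend_model() {
  case "$1" in
    generation|refactoring|architecture) echo "deepseek-coder:33b" ;;
    review|debugging|explanation)        echo "codellama:34b"      ;;
    *)                                   echo "qwen2.5-coder:1.5b" ;;  # lightweight fallback
  esac
}

recommend_model debugging   # prints: codellama:34b
```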
## Configuration

Config file location: `~/.config/configstore/neohub.json`

### Initialize

```bash
# Edit the config file directly, or run init
neohub init
```
### Change the Default Model

Edit the config file:

```json
{
  "ollama": {
    "model": "codellama:34b"
  }
}
```
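
Equivalently, from the shell (assumes `jq` is installed; the path is the config location given above):

```bash
# Switch the default model in place
CONFIG="$HOME/.config/configstore/neohub.json"
jq '.ollama.model = "codellama:34b"' "$CONFIG" > "$CONFIG.tmp" && mv "$CONFIG.tmp" "$CONFIG"
```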
## Supported Models

NeoHub works with any Ollama model.

Recommended for coding:

- `deepseek-coder:33b` - Best for code generation
- `codellama:34b` - Best for code review/explanation
- `qwen2.5-coder:1.5b` - Lightweight and fast

Install models:

```bash
ollama pull deepseek-coder:33b
ollama pull codellama:34b
```
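
You can confirm the downloads with Ollama's own listing command:

```bash
ollama list   # names and sizes of locally installed models
```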
## Use Cases

### Learn Something New

```bash
neohub chat
> How do I implement JWT authentication in Express?
```

### Audit Your Code

```bash
neohub analyze src/ --type security
```

### Modernize Legacy Code

```bash
neohub edit -f *.js -i "convert var to const/let"
```

### Understand Unfamiliar Code

```bash
neohub analyze node_modules/react/index.js --type explain
```
## System Requirements

- Node.js: 18+
- Ollama: latest version
- Disk space: 2-20GB (depends on models)
- RAM: 8GB minimum (16GB+ recommended for 33B models)
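
On Linux you can check both against your machine with standard tools (macOS equivalents differ):

```bash
free -h   # available RAM
df -h ~   # free disk space on the volume holding your home directory
```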
## Performance

Typical response times:

- Code completion: <1s
- Code review: 2-5s
- Complex refactoring: 5-10s

Times vary based on model size and hardware.
## Links

- GitHub: fahadalmansour/NeoHub
- npm: @fsfalmansour/neohub-cli
- Issues: Report a bug

## License

MIT © 2025 Fahad Almansour
Built with:
- Ollama - Local LLM runtime
- Commander.js - CLI framework
- Inquirer.js - Interactive prompts
---
Made with ❤️ for developers who value privacy