# zsh-cli-ai

AI-powered shell assistance for zsh. Convert comments to commands, explain commands, get intelligent autocomplete suggestions, and fix failed commands - all with simple keybindings.
No API keys required. Unlike similar tools (fish-ai, zsh-ai), zsh-cli-ai leverages CLI tools you have already authenticated - Claude Code, Codex CLI, or Ollama for fully local inference. If you already use one of these tools, zsh-cli-ai works out of the box.
Inspired by fish-ai, built for zsh/oh-my-zsh users.
## Features

| Feature | Keybinding | Description |
|---------|------------|-------------|
| Codify | Ctrl+E | Convert # comment to a shell command |
| Explain | Ctrl+E | Explain what a command does |
| Fix | Ctrl+E | Fix the last failed command |
| Autocomplete | Alt+A | Get AI-powered completions via fzf |
Ctrl+E is context-aware and automatically picks the right action:
| Buffer State | Action |
|--------------|--------|
| Empty + failed command exists | Fix the failed command |
| `# find large files` | Codify → convert to `find . -size +100M` |
| `find . -size +100M` | Explain → show what it does |
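The decision table above can be sketched as a small dispatch function (an illustration only, not the plugin's actual source; the real widget also wires the chosen action back into the zsh line editor):

```shell
# Illustrative sketch of the Ctrl+E dispatch logic (not the plugin's
# actual code). Given the current buffer and the last exit status,
# decide which action to run.
ai_smart_action() {
  buffer=$1
  last_status=$2
  if [ -z "$buffer" ] && [ "$last_status" -ne 0 ]; then
    echo fix        # empty line after a failure → fix it
  elif [ "${buffer#\#}" != "$buffer" ]; then
    echo codify     # buffer starts with '#' → comment to command
  else
    echo explain    # otherwise → explain the command
  fi
}

ai_smart_action '# find large files' 0   # → codify
```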
## Requirements

- Node.js 18+
- fzf (for autocomplete)
- One of:
  - Ollama (local LLM - private, no API keys needed)
  - Claude Code (`npm install -g @anthropic-ai/claude-code`)
  - Codex CLI (`npm install -g @openai/codex`)
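A quick way to see which of these CLIs are already on your PATH (a convenience sketch, not part of zsh-cli-ai itself; `zsh-cli-ai doctor` performs the real checks):

```shell
# Probe which backend CLIs are installed by checking the PATH
# (illustrative helper, not part of zsh-cli-ai itself).
probe_backends() {
  for cmd in "$@"; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd: available"
    else
      echo "$cmd: not found"
    fi
  done
}

probe_backends ollama claude codex
```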
## Installation

```bash
npm install -g zsh-cli-ai
zsh-cli-ai init   # Interactive: choose backend (codex/claude/ollama)
exec zsh
```
Or specify a backend directly:

```bash
zsh-cli-ai init ollama   # Use Ollama (local, private)
zsh-cli-ai init claude   # Use Claude Code
zsh-cli-ai init codex    # Use Codex
```
Verify your setup:

```bash
zsh-cli-ai doctor
```
## Backends

zsh-cli-ai supports three AI backends:
| Backend | Models | Characteristics |
|---------|--------|-----------------|
| Ollama | gemma3, llama3, qwen3, phi3, etc. | Local inference, 100% private, no API keys |
| Claude Code | opus, sonnet, haiku | Subprocess per request, ~1-2s latency |
| Codex | OpenAI models | Persistent MCP server, fast responses |
```bash
# Install Ollama (https://ollama.ai)
brew install ollama   # macOS, or download from ollama.ai
```
### Switching backends
```bash
# Show current backend
zsh-cli-ai backend

# Switch backends
zsh-cli-ai backend ollama   # Local LLM
zsh-cli-ai backend claude   # Claude Code
zsh-cli-ai backend codex    # Codex
```

The daemon automatically restarts when you switch backends.
### Switching models
```bash
# Show current model and available Ollama models
zsh-cli-ai model

# Switch to a different model
zsh-cli-ai model llama3.2:3b
zsh-cli-ai model gemma3:4b

# Reset to backend default (gemma3:4b for Ollama)
zsh-cli-ai model clear
```

The daemon automatically restarts when you switch models.
> Tip: Choose non-reasoning models for best results. Reasoning models (like QwQ, DeepSeek-R1) output thinking tokens that pollute the response. Fast, instruction-following models like `gemma3:4b`, `llama3.2:3b`, or `phi3:3.8b` work best for shell tasks.

### Configuration file
Configuration is stored in `~/.config/zsh-cli-ai/config.json` (XDG compliant). You can also override it via environment variable (read at daemon startup):
```bash
ZSH_AI_BACKEND=claude zsh-cli-ai start
```

## Usage
### Ctrl+E - Smart AI (context-aware)
```
# Type a comment, press Ctrl+E to convert to command:
# list all files larger than 100mb
→ find . -type f -size +100M

# Type a command, press Ctrl+E to explain it:
find . -type f -size +100M
→ "Find all regular files larger than 100MB in the current directory"

# After a command fails, press Ctrl+E on empty line to fix:
$ git pish   # typo, exits with error
$
→ git push
```
### Alt+A - AI Autocomplete

```
# Type a partial command, press Alt+A for suggestions:
git sta
→ fzf menu with: git status, git stash, git stash list, etc.
```

### CLI commands
```bash
# AI commands
zsh-cli-ai codify "# find python files modified today"
zsh-cli-ai explain "tar -xzvf archive.tar.gz"
zsh-cli-ai complete "docker "
zsh-cli-ai fix "git pish" 1

# Backend and model management
zsh-cli-ai backend           # Show current backend
zsh-cli-ai backend claude    # Switch to Claude (auto-restarts daemon)
zsh-cli-ai model             # Show current model and available models
zsh-cli-ai model qwen3:4b    # Switch model (auto-restarts daemon)
zsh-cli-ai model clear       # Reset to default model

# History context (opt-in for fix command)
zsh-cli-ai history           # Show if history context is enabled
zsh-cli-ai history on        # Enable sending recent commands for fix context
zsh-cli-ai history off       # Disable (default)

# Daemon management
zsh-cli-ai start             # Start the background daemon
zsh-cli-ai stop              # Stop the daemon
zsh-cli-ai status            # Check daemon status and backend
zsh-cli-ai doctor            # Verify dependencies and backends
```

## Configuration
Configure via zstyle in your `.zshrc`:

```zsh
# Disable the plugin
zstyle ':zsh-cli-ai:*' enabled 'no'

# Custom keybindings
zstyle ':zsh-cli-ai:keybind' smart '^E'       # Default: Ctrl+E
zstyle ':zsh-cli-ai:keybind' complete '^[a'   # Default: Alt+A (ESC-a)

# Add custom redaction patterns (comma-separated regexes)
zstyle ':zsh-cli-ai:redact' extra-patterns 'my-secret-pattern,company-internal-.*'
```

For history context, use the CLI command instead: `zsh-cli-ai history on`

### Environment variables
```bash
# Override the AI backend (read at daemon startup)
export ZSH_AI_BACKEND="ollama"    # or "claude" or "codex"

# Override the AI model (or use: zsh-cli-ai model)
export ZSH_AI_MODEL="gemma3:4b"   # ollama: any installed model
                                  # claude: opus/sonnet/haiku
                                  # codex: gpt-4o etc.

# Custom timeout in milliseconds (default: 30000)
export ZSH_AI_TIMEOUT="60000"

# Ollama server address (default: localhost:11434)
export OLLAMA_HOST="localhost:11434"
```

## Privacy & Security
### Automatic redaction

Before sending any input to the AI (commands, comments, and history if enabled), sensitive data is automatically redacted:

- API keys and tokens (`api_key=...`, `OPENAI_API_KEY=...`)
- AWS credentials (`AKIA...`, `aws_secret_access_key`)
- Private keys (`-----BEGIN PRIVATE KEY-----`)
- Bearer tokens
- Database URLs with credentials
- JWT tokens
- GitHub tokens (`ghp_...`, `ghs_...`)

> Note: These patterns are not foolproof. Be mindful of what you type when not running locally - unusual secret formats or company-specific patterns may not be caught. Add custom patterns for your environment (see below).
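As an illustration of how pattern-based redaction works (a simplified sketch, not the plugin's actual implementation or its exact patterns), a few of the categories above expressed as sed substitutions:

```shell
# Simplified redaction pass (illustrative only; the plugin's real
# patterns are more extensive). Replaces common secret shapes with
# a placeholder before anything is sent to the AI.
redact() {
  sed -E \
    -e 's/(api_key|API_KEY)=[^ ]+/\1=<REDACTED>/g' \
    -e 's/AKIA[A-Z0-9]{16}/<REDACTED>/g' \
    -e 's/ghp_[A-Za-z0-9]{36}/<REDACTED>/g'
}

echo 'export OPENAI_API_KEY=sk-abc123' | redact
# → export OPENAI_API_KEY=<REDACTED>
```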
### Custom redaction patterns

Add your own patterns via zstyle:

```zsh
# Single pattern
zstyle ':zsh-cli-ai:redact' extra-patterns 'my-secret-.*'

# Multiple patterns (comma-separated)
zstyle ':zsh-cli-ai:redact' extra-patterns 'pattern1,pattern2,pattern3'
```

### History context
By default, command history is never sent to the AI. You can enable it for better fix suggestions:

```bash
zsh-cli-ai history on    # Enable
zsh-cli-ai history off   # Disable (default)
zsh-cli-ai history       # Show current status
```

When enabled, the last 3 commands are sent with fix requests to provide context. These commands are redacted with the same patterns as all other input (API keys, tokens, etc. are stripped before sending).
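For a sense of what "the last 3 commands" looks like, here is a sketch of reading them from a history file (illustrative only; the plugin's actual collection mechanism may differ):

```shell
# Read the last 3 entries from a shell history file (illustration
# only; not necessarily how zsh-cli-ai gathers history).
last_commands() {
  tail -n 3 "$1"
}

printf 'ls -la\ncd /tmp\ngit pish\n' > /tmp/demo_history
last_commands /tmp/demo_history
```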
### Timeouts

All requests have a configurable timeout (default 30s). The shell is never blocked - if the AI doesn't respond, you get an error message and can continue working.

```bash
export ZSH_AI_TIMEOUT="60000"   # 60 seconds (useful for slower local models)
```

### Disabling the plugin
```zsh
zstyle ':zsh-cli-ai:*' enabled 'no'
```

## Development
```bash
# Clone and install
git clone https://github.com/Bigsy/zsh-cli-ai
cd zsh-cli-ai
npm install

# Build
npm run build

# Watch mode
npm run dev

# Link for local testing
npm link
```

## Troubleshooting
### Daemon issues

```bash
zsh-cli-ai doctor   # Check all dependencies
zsh-cli-ai stop     # Stop any stuck daemon
zsh-cli-ai start    # Start fresh
```

### Wrong backend responding
```bash
zsh-cli-ai status          # Check which backend is running
zsh-cli-ai backend         # Check configured vs running backend
zsh-cli-ai backend claude  # Switch and auto-restart
```

### Alt+A not working
Some terminals intercept Alt keys. Try:

- iTerm2: Preferences → Profiles → Keys → Left Option Key → Esc+
- Terminal.app: may need to use Esc then a instead of Alt+A

Or remap to a different key:

```zsh
zstyle ':zsh-cli-ai:keybind' complete '^X^A'   # Ctrl+X Ctrl+A instead
```

### Ollama not responding
```bash
# Check if Ollama is running
curl http://localhost:11434/api/version

# Start Ollama if not running
ollama serve

# Check installed models
ollama list

# Pull a model if none are installed
ollama pull gemma3:4b

# Run doctor to verify
zsh-cli-ai doctor
```

If using a custom Ollama host:

```bash
export OLLAMA_HOST="192.168.1.100:11434"
zsh-cli-ai stop && zsh-cli-ai start
```

## License

MIT