# git-commit-ai

> Generate Conventional Commit messages from staged git changes using LLMs (Ollama, llama.cpp, OpenAI, Anthropic, Groq).

git-commit-ai is a CLI that analyzes your `git diff --staged` output and suggests high-quality Conventional Commits (`type(scope): subject`) with an interactive confirm/edit/regenerate flow.

Backends: Ollama (local), llama.cpp (local), OpenAI (GPT models), Anthropic (Claude), Groq (Llama)
## Install

```bash
npm install -g @vavasilva/git-commit-ai
```

## How it works

1. You stage your changes (`git add ...`)
2. git-commit-ai reads `git diff --staged`
3. The selected LLM backend proposes a Conventional Commit message
4. You confirm, edit, regenerate, or abort (no commit happens until you confirm)
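Steps 1–2 are plain git: the staged diff is the only input the LLM sees, so you can inspect exactly what will be sent before running the tool. A quick illustration in a throwaway repo:

```shell
# Sketch of steps 1-2: the staged diff below is what git-commit-ai reads.
# (Temporary repo purely for illustration.)
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo "hello" > greet.txt
git add greet.txt        # step 1: stage the change
git diff --staged        # step 2: this diff becomes the LLM's input
```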

## Features

- **Multiple Backends** - Ollama (local), llama.cpp (local), OpenAI, Anthropic Claude, Groq
- **Auto-Detection** - Automatically selects an available backend
- **Conventional Commits** - Generates `type(scope): subject` format (Karma compatible)
- **Interactive Flow** - Confirm, Edit, Regenerate, or Abort before committing
- **Individual Commits** - Option to commit each file separately
- **Dry Run** - Preview messages without committing
- **Git Hook** - Auto-generate messages on `git commit`
- **Summarize** - Preview changes in plain English before committing
- **Debug Mode** - Troubleshoot LLM responses
- **Configurable** - Customize model, temperature, and more via a config file

## Installation

```bash
# Requires Node.js 20+
npm install -g @vavasilva/git-commit-ai
```

### Backend Setup
Choose at least one backend:

#### Ollama (Local, Free)

```bash
# macOS
brew install ollama
brew services start ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh
sudo systemctl start ollama

# Windows - download the installer from:
# https://ollama.com/download/windows

# Pull a model (all platforms)
ollama pull llama3.1:8b
```

#### llama.cpp (Local, Free, Low Memory)

Run local GGUF models with `llama-server` (auto-detected on port 8080):

```bash
# Install llama.cpp

# macOS
brew install llama.cpp

# Linux (Ubuntu/Debian) - build from source
sudo apt install build-essential cmake
git clone https://github.com/ggml-org/llama.cpp && cd llama.cpp
cmake -B build && cmake --build build --config Release
sudo cp build/bin/llama-server /usr/local/bin/

# Windows - download pre-built binaries from:
# https://github.com/ggml-org/llama.cpp/releases

# Start the server (downloads the model automatically from Hugging Face)
llama-server -hf Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF -ngl 99 --port 8080

# Use with git-commit-ai (auto-detected if running on port 8080)
git-commit-ai

# Or explicitly use the llamacpp backend
git-commit-ai --backend llamacpp

# Configure it as the default backend
git-commit-ai config --set backend=llamacpp
```

#### Run llama-server as a service

##### macOS (launchd)

```bash
# Create the launchd service
cat > ~/Library/LaunchAgents/com.llamacpp.server.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.llamacpp.server</string>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/homebrew/bin/llama-server</string>
        <string>-hf</string>
        <string>Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF</string>
        <string>-ngl</string>
        <string>99</string>
        <string>--port</string>
        <string>8080</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/tmp/llama-server.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/llama-server.err</string>
</dict>
</plist>
EOF

# Start the service
launchctl load ~/Library/LaunchAgents/com.llamacpp.server.plist

# Stop the service
launchctl unload ~/Library/LaunchAgents/com.llamacpp.server.plist

# Check logs
tail -f /tmp/llama-server.log
```

##### Linux (systemd)

```bash
# Create the systemd service (use sudo tee: "sudo cat > file" would not
# write with root privileges, since the redirection runs as your user)
sudo tee /etc/systemd/system/llama-server.service > /dev/null << 'EOF'
[Unit]
Description=llama.cpp Server
After=network.target

[Service]
Type=simple
User=$USER
ExecStart=/usr/local/bin/llama-server -hf Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF -ngl 99 --port 8080
Restart=on-failure
RestartSec=10
StandardOutput=append:/var/log/llama-server.log
StandardError=append:/var/log/llama-server.err

[Install]
WantedBy=multi-user.target
EOF

# Replace $USER with your username
sudo sed -i "s/\$USER/$USER/" /etc/systemd/system/llama-server.service

# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable llama-server
sudo systemctl start llama-server

# Check status
sudo systemctl status llama-server

# View logs
journalctl -u llama-server -f
```

##### Windows (Task Scheduler)

**Option 1: PowerShell script with Task Scheduler**

1. Create a startup script `C:\llama-server\start-llama.ps1`:

```powershell
# start-llama.ps1
Start-Process -FilePath "C:\llama-server\llama-server.exe" `
  -ArgumentList "-hf Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF -ngl 99 --port 8080" `
  -WindowStyle Hidden `
  -RedirectStandardError "C:\llama-server\llama-server.err"
```

2. Create a scheduled task (run in PowerShell as Administrator):

```powershell
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\llama-server\start-llama.ps1"
$trigger = New-ScheduledTaskTrigger -AtLogOn   # or -AtStartup
Register-ScheduledTask -TaskName "LlamaServer" -Action $action -Trigger $trigger
```

**Option 2: Using NSSM (Non-Sucking Service Manager)**

```powershell
# Install NSSM (using Chocolatey)
choco install nssm

# Install llama-server as a Windows service
nssm install LlamaServer "C:\llama-server\llama-server.exe" "-hf Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF -ngl 99 --port 8080"
nssm set LlamaServer AppDirectory "C:\llama-server"
nssm set LlamaServer AppStdout "C:\llama-server\llama-server.log"
nssm set LlamaServer AppStderr "C:\llama-server\llama-server.err"

# Start the service
nssm start LlamaServer

# Stop the service
nssm stop LlamaServer

# Remove the service
nssm remove LlamaServer confirm
```

#### OpenAI

```bash
export OPENAI_API_KEY="your-api-key"
```

#### OpenAI-Compatible APIs

Any OpenAI-compatible API can be used by setting `OPENAI_BASE_URL`:

```bash
# Local server (llama.cpp, vLLM, etc.)
export OPENAI_BASE_URL="http://localhost:8080/v1"

# Or other providers (Together AI, Anyscale, etc.)
export OPENAI_BASE_URL="https://api.together.xyz/v1"
export OPENAI_API_KEY="your-api-key"
```

#### Anthropic (Claude)

```bash
export ANTHROPIC_API_KEY="your-api-key"
```

#### Groq (Fast & Free tier)

```bash
export GROQ_API_KEY="your-api-key"
```

## Usage

```bash
# Basic: stage files + generate + confirm + commit
git add file1.ts file2.ts
git-commit-ai

# Stage all changes and commit (equivalent to git add . && git-commit-ai)
git-commit-ai --all

# Auto-commit without confirmation
git add .
git-commit-ai -y

# Commit and push in one command
git add .
git-commit-ai --push

# Commit each modified file separately
git-commit-ai --individual

# Preview the message without committing (dry run)
git add .
git-commit-ai --dry-run

# Amend the last commit with a new message
git-commit-ai --amend

# Force a specific scope and type
git-commit-ai --scope auth --type fix

# Generate the message in a specific language
git-commit-ai --lang pt

# Reference an issue
git-commit-ai --issue 123

# Mark as a breaking change
git-commit-ai --breaking

# Add co-authors
git-commit-ai --co-author "Jane Doe <jane@example.com>"

# Provide additional context
git-commit-ai --context "This fixes the login bug reported by QA"

# Use a specific backend
git-commit-ai --backend llamacpp
git-commit-ai --backend openai
git-commit-ai --backend anthropic
git-commit-ai --backend groq

# Override the model
git-commit-ai --model gpt-4o
git-commit-ai --model claude-3-sonnet-20240229

# Adjust creativity (temperature)
git-commit-ai --temperature 0.3

# Preview changes before committing
git add .
git-commit-ai summarize

# Enable debug output for troubleshooting
git-commit-ai --debug

# Show current config
git-commit-ai config

# Set a config value
git-commit-ai config --set backend=llamacpp
git-commit-ai config --set model=gpt-4o
git-commit-ai config --set temperature=0.5

# Use short aliases
git-commit-ai config --set lang=pt    # → default_language
git-commit-ai config --set scope=api  # → default_scope
git-commit-ai config --set type=feat  # → default_type
git-commit-ai config --set temp=0.5   # → temperature

# List valid config keys and aliases
git-commit-ai config --list-keys

# Create/edit the config file manually
git-commit-ai config --edit
```

## Git Hook (Auto-generate on commit)

Install a git hook to automatically generate commit messages:

```bash
# Install the hook
git-commit-ai hook --install

# Now just use git commit normally!
git add .
git commit
# The message is auto-generated and opens in your editor

# Check hook status
git-commit-ai hook --status

# Remove the hook
git-commit-ai hook --remove
```
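For reference, git's `prepare-commit-msg` hooks follow a simple contract: git invokes the hook with the path of the commit message file as its first argument, and whatever the hook writes there becomes the pre-filled message. A minimal hand-rolled sketch of that mechanism (not the actual hook this tool installs, which calls the configured LLM backend):

```shell
# Hypothetical prepare-commit-msg hook, showing only the contract git uses:
# $1 is the path to the commit message file; the hook pre-fills it.
cat > prepare-commit-msg <<'EOF'
#!/bin/sh
MSG_FILE="$1"
# git-commit-ai's real hook generates this line from the staged diff;
# we hardcode one here to show the shape of the mechanism.
echo "feat(demo): example generated message" > "$MSG_FILE"
EOF
chmod +x prepare-commit-msg

# Simulate what git does during "git commit":
./prepare-commit-msg msg.txt
cat msg.txt
```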

## Interactive Flow

```
📝 Generated commit message

feat(auth): add login validation

[C]onfirm [E]dit [R]egenerate [A]bort? _
```

## Configuration

### Global Config

Location: `~/.config/git-commit-ai/config.toml`

```toml
# Backend: ollama, llamacpp, openai, anthropic, groq
backend = "ollama"
model = "llama3.1:8b"
ollama_url = "http://localhost:11434"
temperature = 0.7
retry_temperatures = [0.5, 0.3, 0.2]

# OpenAI Base URL - change this to use OpenAI-compatible APIs
# Examples:
# - Default OpenAI: https://api.openai.com/v1
# - llama.cpp: http://localhost:8080/v1
# - Together AI: https://api.together.xyz/v1
openai_base_url = "https://api.openai.com/v1"

# Optional: ignore files in the diff analysis
ignore_patterns = ["*.lock", "package-lock.json", "*.min.js"]

# Optional: set defaults for commit messages
default_scope = "api"       # Default scope if not specified
default_type = "feat"       # Default commit type
default_language = "en"     # Default language (en, pt, es, fr, de)
```

### Project Config

Create `.gitcommitai` or `.gitcommitai.toml` in your project root to override global settings:

```toml
# .gitcommitai
default_scope = "frontend"
default_language = "pt"
ignore_patterns = ["dist/*", "*.generated.ts"]
```

### Default Models
| Backend | Default Model |
|---------|---------------|
| ollama | llama3.1:8b |
| llamacpp | gpt-4o-mini (alias) |
| openai | gpt-4o-mini |
| anthropic | claude-3-haiku-20240307 |
| groq | llama-3.1-8b-instant |
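These defaults only apply when nothing more specific is set: a `--model` flag wins over the config file, which wins over the backend's built-in default. The assumed resolution order can be sketched with plain shell parameter expansion (an illustration, not the tool's actual implementation):

```shell
# Assumed precedence: CLI flag > config file > backend default.
cli_model=""                   # value of --model; empty when the flag is absent
config_model="llama3.1:8b"     # model from config.toml, if set
backend_default="gpt-4o-mini"  # the backend's built-in default

# ${a:-b} falls through to b when a is empty or unset.
model="${cli_model:-${config_model:-$backend_default}}"
echo "$model"   # → llama3.1:8b (config wins because no flag was given)
```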
## CLI Options

| Option | Description |
|--------|-------------|
| `-a, --all` | Stage all changes before committing |
| `-p, --push` | Push after commit |
| `-y, --yes` | Skip confirmation |
| `-i, --individual` | Commit files individually |
| `-d, --debug` | Enable debug output |
| `--dry-run` | Show message without committing |
| `--amend` | Regenerate and amend the last commit |
| `-b, --backend` | Backend to use |
| `-m, --model` | Override model |
| `-t, --temperature` | Override temperature (0.0-1.0) |
| `-s, --scope` | Force a specific scope (e.g., auth, api) |
| `--type` | Force commit type (feat, fix, docs, etc.) |
| `-c, --context` | Provide additional context for generation |
| `-l, --lang` | Language for the message (en, pt, es, fr, de) |
| `--issue` | Reference an issue (e.g., 123 or #123) |
| `--breaking` | Mark as breaking change (adds `!` to the type) |
| `--co-author` | Add a co-author (can be repeated) |

## Config Commands

| Command | Description |
|---------|-------------|
| `config` | Show current configuration |
| `config --edit` | Create/edit the config file manually |
| `config --set` | Set a config value |
| `config --list-keys` | List all valid config keys |

## Commit Types (Conventional Commits)

| Type | Description |
|------|-------------|
| `feat` | New feature |
| `fix` | Bug fix |
| `docs` | Documentation |
| `style` | Formatting (no code change) |
| `refactor` | Code restructuring |
| `test` | Adding tests |
| `build` | Build system or dependencies |
| `chore` | Maintenance tasks |
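A quick way to sanity-check a subject line against the `type(scope): subject` shape is a regex over the types above. The pattern here is an illustration, not the tool's actual validation:

```shell
# Check a subject line against the Conventional Commit types listed above;
# the optional "!" covers breaking changes (e.g. feat(api)!: ...).
msg="feat(auth): add login validation"
pattern='^(feat|fix|docs|style|refactor|test|build|chore)(\([a-z0-9-]+\))?!?: .+'
if echo "$msg" | grep -Eq "$pattern"; then
  echo "valid"
else
  echo "invalid"
fi
```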

## Environment Variables

| Variable | Description |
|----------|-------------|
| `OPENAI_API_KEY` | OpenAI API key |
| `OPENAI_BASE_URL` | OpenAI-compatible API base URL (default: https://api.openai.com/v1) |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `GROQ_API_KEY` | Groq API key |

## License
MIT