# SpecterAi

SpecterAi is an intelligent, AI-powered git commit message generator. It analyzes your staged changes and uses Ollama (running locally) to generate Conventional Commit messages.
> Why SpecterAi?
>
> * Privacy Focused: Your code never leaves your machine.
> * Fast: Zero network latency for API calls.
> * Smart: Infers scopes, summarizes changes, and follows conventional standards.
---
## Prerequisites

SpecterAi relies on Ollama to run Large Language Models (LLMs) or Small Language Models (SLMs) locally.
1. Download Ollama: Visit ollama.ai and download the version for your OS.
2. Pull a Model: Open your terminal and pull a lightweight but capable model such as llama3.2 or mistral. The recommended model is llama3.2.
```bash
ollama pull llama3.2
```
3. Verify: Ensure Ollama is running.
```bash
ollama list
# You should see 'llama3.2' in the list
```
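Optionally, you can send the model a quick prompt to confirm it responds (the prompt itself is arbitrary):

```bash
ollama run llama3.2 "Reply with OK if you can read this."
```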
## Installation

You can install SpecterAi globally using npm, bun, or yarn (npx is also supported).

```bash
# Using npm
npm install -g specterai
```
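If you use another package manager, the equivalent commands should be as follows (assuming standard bun and yarn-classic syntax):

```bash
# Using bun
bun add -g specterai

# Using yarn
yarn global add specterai

# Or run without a global install
npx specterai
```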
---
## ⚡ Efficient Usage

### Automatic Commits (Git Hook)
For the most efficient workflow, let SpecterAi handle your commits automatically.
1. Install the hook (inside any git repository):
```bash
specterai install-hook
```
2. Commit as usual:
```bash
git add .
git commit
```

SpecterAi will intercept the commit, analyze your changes, and pre-fill your commit message editor with a generated suggestion. You can just save and close to accept, or edit it if needed.
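For illustration, the commit editor might open with something like the following already filled in (the message itself is just an example; the comment lines are standard git behavior):

```text
feat(api): add pagination to user list endpoint

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
```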
### On-Demand Generation
If you prefer to generate messages on demand:
```bash
git add .
specterai
```

You will enter an interactive mode:

```text
Proposed Commit Message:
------------------------
feat(auth): keycloak integration for sso
------------------------
[C]ommit, [E]dit, [R]egenerate, [Q]uit?
```

* [C]ommit: Approve and commit immediately.
* [E]dit: Open your default $EDITOR to tweak the message (see the note below).
* [R]egenerate: Ask the AI to try again (maybe it missed a detail).
* [Q]uit: Exit without committing.
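The [E]dit option relies on your shell's $EDITOR variable. If it is not set, you can point it at your preferred editor before running SpecterAi, for example:

```bash
# Use nano (or vim, code --wait, etc.) for the [E]dit option
export EDITOR=nano
```

---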
## 🛠 Configuration
Customize SpecterAi to fit your workflow by creating a `.specterrc.json` file in your home directory (`~/.specterrc.json`) or project root.

Example Config:
```json
{
  "model": "llama3.2",
  "maxLength": 50,
  "maxDiffLength": 4000,
  "generateScopedCommit": true
}
```
* model: The Ollama model to use (default: llama3.2).
* maxLength: Max characters for the commit subject line.
* maxDiffLength: Truncate very large diffs to avoid context limit errors.
* generateScopedCommit: Try to infer scopes like feat(ui): instead of just feat: (illustrated below).
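With generateScopedCommit enabled, SpecterAi tries to infer a Conventional Commit scope; with it disabled, only the type is used. The subjects below are purely illustrative:

```text
# generateScopedCommit: true
feat(ui): add dark mode toggle

# generateScopedCommit: false
feat: add dark mode toggle
```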
View your current effective config:

```bash
specterai config show
```

---
## ❓ FAQ
Q: I get "Connection refused" errors.
A: Make sure Ollama is running (`ollama serve`, or check your menu bar).

Q: The generated messages are tailored to the wrong context.
A: Try a larger model like llama3.2 or mistral if your machine can handle it; you can configure the model in `.specterrc.json`.
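For example, to switch to mistral (after pulling it with `ollama pull mistral`), set the model in your config; assuming unspecified options fall back to their defaults, a minimal config could look like:

```json
{
  "model": "mistral"
}
```

---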
## License

MIT