[Unit Tests](https://github.com/Galvanized-Pukeko/gaunt-sloth-assistant/actions/workflows/unit-tests.yml)
- Reviews your own code before committing (`git --no-pager diff | gsloth review`);
- Reviews Pull Requests (PRs) (`gsloth pr 42`);
- Fetches descriptions (requirements) from GitHub issues or Jira (`gsloth pr 42 12`);
- Answers questions about provided code;
- Writes code;
- Connects to MCP server (including remote MCP with OAuth);
- Executes custom shell commands (deployments, migrations, tests, etc.) with security validation;
- Saves all responses in timestamped .md files (override with -w/--write-output-to-file);
- Anything else you need, when combined with other command line tools.
- OpenRouter;
- Groq;
- DeepSeek;
- Google AI Studio and Google Vertex AI;
- Anthropic;
- OpenAI (and other providers using OpenAI format, such as Inception);
- Local AI: LM Studio, Ollama, llama.cpp (via OpenAI compatibility);
- Ollama with JS config (some of the models, see https://github.com/Galvanized-Pukeko/gaunt-sloth-assistant/discussions/107)
- xAI;
- Any other provider supported by LangChain.JS should also work with JS config.
The `gth` and `gsloth` commands are used interchangeably; both `gsloth pr 42` and `gth pr 42` do the same thing.
For detailed information about all commands, see docs/COMMANDS.md.
These apply to every command:
- -c, --config – load a specific config file without changing directories
- -i, --identity-profile – switch to another profile under .gsloth/.gsloth-settings/
- -w, --write-output-to-file – control response files (true by default, use -wn/-w0 for false, or pass a filename)
- --verbose – enable verbose LangChain/LangGraph logs (useful when debugging prompts)
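For instance, several of these flags can be combined in one invocation (the config path, profile name, and question below are purely illustrative):

```bash
# Load a specific config, switch identity profile, and skip writing the response to a file
gsloth -c ./.gsloth/.gsloth-settings/review.json -i reviewer -wn ask "Is this function thread-safe?" -f utils.js
```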
- init - Initialize Gaunt Sloth in your project with a specific AI provider
- pr - Review pull requests with optional requirement integration (GitHub issues or Jira). ⚠️ Requires the GitHub CLI to be installed.
- review - Review any diff or content from various sources
- ask - Ask questions about code or programming topics
- chat - Start an interactive chat session
- code - Write code interactively with full project context
Initialize project:
```bash
gsloth init anthropic
```
Review PR with requirements:
```bash
gsloth pr 42 23 # Review PR #42 with GitHub issue #23
```
Review local changes:
```bash
git --no-pager diff | gsloth review
```
Review changes between a specific tag and the HEAD:
```bash
git --no-pager diff v0.8.3..HEAD | gth review
```
Review the diff between the previous release and HEAD using a specific requirements provider (GitHub issue 38) rather than the one configured by default:
```bash
git --no-pager diff v0.8.10 HEAD | npx gth review --requirements-provider github -r 38
```
Ask questions:
```bash
gsloth ask "What does this function do?" -f utils.js
```
Write release notes:
```bash
git --no-pager diff v0.8.3..HEAD | gth ask "inspect existing release notes in assets/release-notes/v0_8_2.md; inspect provided diff and write release notes to v0_8_4.md"
```
To write this to the filesystem, you'd need to grant filesystem access to the `ask` command in `.gsloth.config.json`:

```json
{
  "llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
  "commands": {"ask": {"filesystem": "all"}}
}
```
You can improve this significantly by modifying the project guidelines in `.gsloth.guidelines.md`, or by keeping instructions in a file and feeding it in with `-f`.
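`.gsloth.guidelines.md` is free-form Markdown; a minimal sketch (the content below is entirely illustrative) might look like:

```markdown
# Project guidelines

- This is a TypeScript project; prefer idiomatic TS and avoid `any`.
- When writing release notes, follow the structure of previous notes in assets/release-notes/.
- Keep review comments actionable and reference file names and line numbers.
```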
Interactive sessions:
```bash
gsloth chat # Start chat session
gsloth code # Start coding session
```
Running with no subcommand also drops you into chat.
Tested with Node 22 LTS.
```bash
npm install gaunt-sloth-assistant -g
```

## Configuration
> Gaunt Sloth currently only functions from a directory which has a configuration file (`.gsloth.config.js`, `.gsloth.config.json`, or `.gsloth.config.mjs`) and `.gsloth.guidelines.md`. Configuration files can be located in the project root or in the `.gsloth/.gsloth-settings/` directory.
>
> You can also specify a path to a configuration file directly using the `-c` or `--config` global flag, for example `gth -c /path/to/your/config.json ask "who are you?"`.
> Note, however, that project guidelines are used from the current directory if they exist, and the simple default prompt from the install dir is used if nothing is found.

Configuration can be created with the `gsloth init [vendor]` command.
Currently, `openrouter`, `anthropic`, `groq`, `deepseek`, `openai`, `google-genai`, `vertexai`, and `xai` can be configured with `gsloth init [vendor]`.
For OpenAI-compatible providers like Inception, use `gsloth init openai` and modify the configuration. More detailed information on configuration can be found in CONFIGURATION.md.
Gaunt Sloth also supports `.aiignore` for excluding files from filesystem tools, with overrides via config.
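A hypothetical `.aiignore` might look like the following (this sketch assumes gitignore-style patterns; the entries are illustrative):

```
# Keep secrets and bulky artifacts away from the AI's filesystem tools
.env
*.pem
dist/
node_modules/
```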
### Custom Tools

Gaunt Sloth supports defining custom shell commands that the AI can execute. These custom tools:

- Work across all commands (`pr`, `review`, `code`, `ask`, `chat`)
- Can be configured globally or per-command
- Support parameters with security validation
- Are useful for deployments, migrations, automation, and more

Example configuration:
```json
{
  "llm": {"type": "vertexai", "model": "gemini-2.5-pro"},
  "customTools": {
    "deploy": {
      "command": "npm run deploy",
      "description": "Deploy the application"
    },
    "run_migration": {
      "command": "npm run migrate -- ${name}",
      "description": "Run a database migration",
      "parameters": {
        "name": {"description": "Migration name"}
      }
    }
  }
}
```

See Custom Tools Configuration for complete documentation.
### Google AI Studio

```bash
cd ./your-project
gsloth init google-genai
```
Make sure you either define the GOOGLE_API_KEY environment variable or edit your configuration file and set up your key.
It is recommended to obtain an API key from the official Google AI Studio website rather than from a reseller.

### Google Vertex AI
```bash
cd ./your-project
gsloth init vertexai
gcloud auth login
gcloud auth application-default login
```

As of 19 Nov 2025, Gemini 3 on Vertex AI works with the `global` and `us-central1` locations when using the default `aiplatform.googleapis.com` endpoint.
However, regional endpoints (e.g., `us-central1-aiplatform.googleapis.com`) currently return 404 for Gemini 3.
Example config:
```json
{
  "llm": {
    "type": "vertexai",
    "model": "gemini-3-pro-preview",
    "location": "global"
  }
}
```

### OpenRouter
```bash
cd ./your-project
gsloth init openrouter
```

Make sure you either define the OPEN_ROUTER_API_KEY environment variable or edit your configuration file and set up your key.

### Anthropic
```bash
cd ./your-project
gsloth init anthropic
```

Make sure you either define the ANTHROPIC_API_KEY environment variable or edit your configuration file and set up your key.

### Groq
```bash
cd ./your-project
gsloth init groq
```

Make sure you either define the GROQ_API_KEY environment variable or edit your configuration file and set up your key.

### DeepSeek
```bash
cd ./your-project
gsloth init deepseek
```

Make sure you either define the DEEPSEEK_API_KEY environment variable or edit your configuration file and set up your key.
It is recommended to obtain an API key from the official DeepSeek website rather than from a reseller.

### OpenAI
```bash
cd ./your-project
gsloth init openai
```

Make sure you either define the OPENAI_API_KEY environment variable or edit your configuration file and set up your key.

### LM Studio
LM Studio provides a local OpenAI-compatible server for running models on your machine:
```bash
cd ./your-project
gsloth init openai
```

Then edit your configuration file to point to LM Studio (default: http://127.0.0.1:1234/v1).
Use any string for the API key (e.g., "none"); LM Studio doesn't validate it.

Important: the model must support tool calling. Tested models include gpt-oss, granite, nemotron, seed, and qwen3.
See CONFIGURATION.md for detailed setup.
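As a rough sketch, such a configuration might point the OpenAI provider at the local server (the field names and model name below are assumptions; CONFIGURATION.md is authoritative):

```json
{
  "llm": {
    "type": "openai",
    "model": "qwen3-8b",
    "apiKey": "none",
    "configuration": {
      "baseURL": "http://127.0.0.1:1234/v1"
    }
  }
}
```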
### Other OpenAI-compatible providers
For providers using OpenAI-compatible APIs:
```bash
cd ./your-project
gsloth init openai
```

Then edit your configuration to add a custom base URL and API key. See CONFIGURATION.md for examples.

### xAI
```bash
cd ./your-project
gsloth init xai
```

Make sure you either define the XAI_API_KEY environment variable or edit your configuration file and set up your key.

### Other AI providers
Any other AI provider supported by LangChain.js can be configured with a JS config.
For example, Ollama can be set up with a JS config (some models only; see https://github.com/Galvanized-Pukeko/gaunt-sloth-assistant/discussions/107).

### JavaScript configs
JavaScript configs enable advanced customization, including custom middleware and tools that aren't available in JSON configs. See the JavaScript config example for a complete demonstration of creating custom logging middleware and custom tools.

## Integration with GitHub Workflows / Actions
An example GitHub workflows integration can be found in .github/workflows/review.yml;
this example workflow performs an AI review on any push to a pull request, resulting in a comment left by the GitHub Actions bot.
## MCP (Model Context Protocol) Servers
Gaunt Sloth supports connecting to MCP servers, including those requiring OAuth authentication.
This has been tested with the Atlassian Jira MCP server.
See the MCP configuration section for detailed setup instructions, or the Jira MCP example for a working configuration.
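As an illustrative sketch only (the key names below are assumptions; see the MCP configuration section for the real schema), a remote MCP server entry might look like:

```json
{
  "mcpServers": {
    "atlassian": {
      "url": "https://mcp.atlassian.com/v1/sse"
    }
  }
}
```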
If you experience issues with MCP auth, try finding the `.gsloth` dir in your home directory and deleting the JSON file matching the server you are trying to connect to; for example, for the Atlassian MCP the file would be `~/.gsloth/.gsloth-auth/mcp.atlassian.com_v1_sse.json`.

## A2A (Agent-to-Agent) Protocol Support (Experimental)
Gaunt Sloth supports the A2A protocol for connecting to external AI agents. See CONFIGURATION.md for setup instructions.
## Uninstall
Uninstall global NPM package:
```bash
npm uninstall -g gaunt-sloth-assistant
```

Remove global config (if any):

```bash
rm -r ~/.gsloth
```

Remove configs from project (if necessary):

```bash
rm -r ./.gsloth*
```