# llm-launcher

Cross-platform CLI launcher for LLM agents (Aider) with support for Ollama (local) and OpenRouter (cloud) models.
## Features

- Cross-platform (Windows, macOS, Linux)
- Support for Ollama local models
- Support for OpenRouter cloud models
- Interactive model selection with a clean CLI interface
- Automatic dependency checking
- Environment variable support via `.env` files
- TypeScript-based for type safety
## Requirements

- Node.js >= 16
- Aider installed (`pip install aider-chat`)
- Ollama (optional, for local models)
- OpenRouter API key (optional, for cloud models)
## Installation

### Global install

```bash
npm install -g llm-launcher
```

Then run from anywhere:

```bash
llm
# or
llm-launcher
```
### From source

```bash
git clone <repository-url>
cd llm-launcher
npm install
npm run build
npm link
```
### Run with npx

```bash
npx llm-launcher
```
## Usage

Simply run the command and follow the interactive prompts:

```bash
llm
```
The launcher will:
1. Check if Aider is installed
2. Prompt you to select a provider (Ollama or OpenRouter)
3. Show available models and let you select one
4. Launch Aider with your selected model
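Under the hood, steps 1 and 4 amount to a dependency check and a process handoff. Here is a minimal TypeScript sketch of that flow; the function names and details are illustrative assumptions, not the launcher's actual source:

```typescript
// Illustrative sketch only; helper names are assumptions, not the project's API.
import { spawn, spawnSync } from "node:child_process";

// Step 1: Aider counts as "installed" if `aider --version` exits cleanly.
function aiderInstalled(): boolean {
  const result = spawnSync("aider", ["--version"], { stdio: "ignore", shell: true });
  return result.status === 0;
}

// Step 4: hand the terminal over to Aider with the chosen model.
// Aider takes the model via --model; Ollama models are typically passed
// as "ollama/<name>" and OpenRouter models as "openrouter/<id>".
function launchAider(model: string): void {
  spawn("aider", ["--model", model], { stdio: "inherit", shell: true });
}

if (!aiderInstalled()) {
  console.error("Aider not found. Install it with: pip install aider-chat");
  process.exit(1);
}
```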
### Using Ollama (local models)

1. Select "Ollama (Local models)" from the provider menu
2. The launcher will automatically start Ollama if it's not running
3. Select from your locally pulled models
4. Aider will launch with the selected model
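The auto-start in step 2 can be done by probing Ollama's local HTTP API (it listens on port 11434 by default) and spawning `ollama serve` when the probe fails. A hedged sketch, assuming Node 18+ for the global `fetch`; the real launcher's logic may differ:

```typescript
// Sketch only: illustrates the check/start/list pattern, not the actual code.
import { spawn } from "node:child_process";

const OLLAMA_URL = "http://localhost:11434";

async function ollamaRunning(): Promise<boolean> {
  try {
    const res = await fetch(`${OLLAMA_URL}/api/tags`); // lists pulled models
    return res.ok;
  } catch {
    return false; // connection refused -> server not running
  }
}

// Returns the locally pulled model names, starting the server if needed.
async function ensureOllamaModels(): Promise<string[]> {
  if (!(await ollamaRunning())) {
    spawn("ollama", ["serve"], { detached: true, stdio: "ignore" }).unref();
    await new Promise((resolve) => setTimeout(resolve, 2000)); // give it a moment
  }
  const res = await fetch(`${OLLAMA_URL}/api/tags`);
  const data = (await res.json()) as { models: { name: string }[] };
  return data.models.map((m) => m.name); // e.g. ["llama3.3:latest"]
}
```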
First-time Ollama users:
```bash
# Pull a model first
ollama pull llama3.3
# or
ollama pull qwen2.5-coder
# or
ollama pull deepseek-coder
```
### Using OpenRouter (cloud models)

1. Select "OpenRouter (Cloud API)" from the provider menu
2. Enter your OpenRouter API key (or set via environment variable)
3. Select from popular models or enter a custom model name
4. Aider will launch with the selected model
Get an OpenRouter API key: https://openrouter.ai
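Step 2's key handling likely follows the usual pattern: check the environment first and only prompt when nothing is set. A sketch using Node's built-in `readline/promises` (Node 17+); the launcher's actual prompt library may differ:

```typescript
// Illustrative only; the real prompt flow may use a different library.
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

async function resolveOpenRouterKey(): Promise<string> {
  // Prefer the environment (set directly or loaded from a .env file).
  const fromEnv = process.env.OPENROUTER_API_KEY;
  if (fromEnv) return fromEnv;

  // Fall back to an interactive prompt.
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const key = await rl.question("Enter your OpenRouter API key: ");
  rl.close();
  return key.trim();
}
```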
### Environment Variables

Create a `.env` file in your project directory or set environment variables:

```bash
# OpenRouter API key
OPENROUTER_API_KEY=your_api_key_here
```
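The `.env` support is most commonly wired up with the dotenv package; whether this project uses dotenv specifically is an assumption, but the pattern looks like:

```typescript
// Assumes the dotenv package; the project may load .env differently.
import "dotenv/config"; // reads .env from the current working directory

// After loading, the key is available like any other environment variable.
const apiKey = process.env.OPENROUTER_API_KEY;
```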
### Supported Models
#### Ollama

Any model you've pulled locally via `ollama pull`.

#### OpenRouter
- Anthropic Claude (3.5 Sonnet, 3 Opus, 3 Sonnet, 3 Haiku)
- OpenAI GPT (GPT-4 Turbo, GPT-4, GPT-3.5 Turbo)
- Google Gemini (Pro 1.5, Pro)
- Meta Llama (3.1 70B, 3.1 8B)
- Mistral AI (Large, Medium)
- Qwen (2.5 72B)
- Custom models (enter any OpenRouter model identifier)
## Development

### Build

```bash
npm run build
```

### Watch mode

```bash
npm run watch
```

### Run in development mode

```bash
npm run dev
```

## Publishing to npm

### First-time setup
1. Create an npm account at https://www.npmjs.com/signup
2. Login to npm from the command line:
```bash
npm login
```

### Publishing a new version
1. Update the version in `package.json` (or use `npm version`):

   ```bash
   # Patch release (0.1.0 -> 0.1.1)
   npm version patch

   # Minor release (0.1.0 -> 0.2.0)
   npm version minor

   # Major release (0.1.0 -> 1.0.0)
   npm version major
   ```

2. Publish to npm:

   ```bash
   npm publish
   ```

The `prepare` script automatically builds the project before publishing.
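For reference, npm runs the `prepare` hook on `npm publish` (and on local `npm install`), so the relevant part of `package.json` presumably looks something like this (assumed, not copied from the project):

```json
{
  "scripts": {
    "build": "tsc",
    "prepare": "npm run build"
  }
}
```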
### Publishing a beta version

```bash
npm version prerelease --preid=beta
npm publish --tag beta
```

Users can install beta versions with:

```bash
npm install -g llm-launcher@beta
```

## Architecture
```
src/
├── index.ts       # Main entry point and provider selection
├── types.ts       # TypeScript type definitions
├── utils.ts       # Utility functions (process checking, command execution)
├── ollama.ts      # Ollama provider implementation
└── openrouter.ts  # OpenRouter provider implementation
```
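Based on this layout, `types.ts` presumably defines a small provider abstraction along these lines (an illustrative guess, not the actual definitions):

```typescript
// Illustrative shapes only; the real interfaces in types.ts may differ.
export interface ModelChoice {
  id: string;    // model identifier passed to Aider, e.g. "ollama/llama3.3"
  label: string; // human-readable name shown in the selection menu
}

export interface Provider {
  name: string;                          // "Ollama" or "OpenRouter"
  isAvailable(): Promise<boolean>;       // dependency / connectivity check
  listModels(): Promise<ModelChoice[]>;  // models offered in the picker
}
```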
## Why TypeScript?
- Type safety catches errors at compile time
- Better IDE support and autocomplete
- Self-documenting code with interfaces
- Easier refactoring and maintenance
- Still compiles to standard JavaScript
## Troubleshooting

### Aider not found

Install Aider:
```bash
pip install aider-chat
```

### Ollama not found

Download and install Ollama from: https://ollama.ai

### No Ollama models available
Pull a model first:
```bash
ollama pull llama3.3
```

### Issues on Windows

Make sure you're running in PowerShell or Command Prompt with proper permissions.
## License

MIT

## Author

Thomas Powell

## Contributing

Contributions welcome! Please feel free to submit issues or pull requests.