Banjin is a powerful, extensible AI command-line assistant designed for developers, system administrators, and power users. It acts as an intelligent agent that can operate on your local machine or connect to remote servers via SSH, allowing you to perform complex tasks using natural language.
> Banjin was crafted with the help of an AI assistant, making it an application by AI, for AI (and the humans who command them).
Think of it as a junior developer or sysadmin you can chat with, capable of executing commands, managing files, and integrating with external services, all while asking for your approval before taking any action.
Banjin is currently in active development. While we strive for stability and security, this software is provided "as is" without any warranties. We are not responsible for any data loss, system damage, security breaches, or other issues that may arise from using this application.
Key risks to consider:
- Data Loss: Backup your important files before using Banjin, especially when working with remote servers or file operations.
- Security: Banjin may execute commands on your behalf. Always review actions before approving them.
- API Costs: Using LLMs can incur costs depending on your provider. Monitor your usage.
- Experimental Features: Some features are still evolving and may change or have bugs.
- No Liability: The developers and contributors are not liable for any damages or losses incurred through the use of this software.
Use at your own risk and always have backups of critical systems.
- Remote Operations via SSH: Securely connect to any server and instruct the AI to perform tasks, manage files, or run diagnostics directly on the remote machine.
- Intelligent Tool-Based Agent: Banjin uses a Large Language Model (LLM) that can reason and decide which tools to use to accomplish your goals.
- Interactive Confirmation: For safety, Banjin will always show you the exact command or action it intends to perform and ask for your explicit approval before execution.
- Extensible with MCP Tools: The Model Context Protocol (MCP) tool system allows you to extend Banjin's capabilities.
- Context-Aware: Provide the AI with custom instructions and context through .md files, tailoring its behavior and knowledge to your specific project or environment.
- Session Management: Save, load, and reset conversations to manage different tasks and contexts efficiently.
- Input History: Navigate through previous inputs using the arrow keys (up/down) in line input mode. History is session-based and resets when you restart Banjin.
- File Transfer: Upload and download files securely between your local machine and remote servers using the /upload and /download commands.
- Real-time Monitoring: Watch commands execute repeatedly with /watch, or monitor log files in real time with /tail.
- Container Management: Full Docker container management with the /docker command, supporting ps, logs, exec, start, stop, and more.
- Database Backup: Automated backups for MySQL, PostgreSQL, and MongoDB databases with /db-backup.
- Self-Updating: Use the /update command to keep Banjin at the latest version.
Prerequisites: Node.js 20 or higher is required.
```bash
npm install -g banjin
```
On the first run, Banjin will guide you through creating a global configuration directory at ~/.banjin.
This directory will contain:
- `config.yaml`: The main configuration file. You must edit this file to add your LLM API key.
- `mcp-servers.json`: Configuration for your custom MCP tools.
- `context.md`: A file for your global system context and instructions for the AI.
- `ssh-servers.json`: A file to store your SSH server aliases and connection details.
Security Note: Your config.yaml contains sensitive API keys. It is highly recommended to secure this file by setting its permissions to be readable only by you (e.g., chmod 600 ~/.banjin/config.yaml), as shown below.
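For example (paths match the defaults above):
```bash
# Make the config readable and writable by your user only
chmod 600 ~/.banjin/config.yaml
# Verify: the mode should now read -rw-------
ls -l ~/.banjin/config.yaml
```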
Banjin is built to work with any OpenAI-compatible API endpoint that supports tool calling (function calling). It auto-detects the provider from your base URL and applies provider-specific configuration automatically.
Tested & Recommended:
| Provider | Base URL | Model Format | Notes |
|----------|----------|--------------|-------|
| Groq | https://api.groq.com/openai/v1 | llama-3.1-8b-instant | Fastest, recommended for tool use |
| OpenRouter | https://openrouter.ai/api/v1 | provider/model (e.g., meta-llama/llama-3.1-8b-instruct) | Multi-model aggregator; requires HTTP-Referer header (auto-added) |
| Together.AI | https://api.together.ai/v1 | meta-llama/Llama-3-70b-instruct | Fast open-source models |
| Hugging Face | https://api-inference.huggingface.co/v1 | Standard HF model IDs | Free tier available |
| Generic OpenAI-Compatible | http://localhost:8000/v1 (e.g., local) | Your model's format | Self-hosted, local LLM servers, vLLM, etc. |
### Groq
Fastest inference engine with excellent tool support. Get an API key from groq.com.
Models with tool support:
- llama-3.1-8b-instant (fast, good for tool use)
- llama-3.3-70b-versatile (powerful, supports complex tool chains)
### OpenRouter
Multi-model platform with access to hundreds of models including GPT-4, Claude, Llama, and more.
Setup:
1. Get an API key from openrouter.ai
2. Model format: provider/model (e.g., meta-llama/llama-3.1-70b-instruct, openai/gpt-4o)
3. Note: Banjin automatically adds the required HTTP-Referer header for OpenRouter
Popular models on OpenRouter:
- openai/gpt-4o - Most capable
- meta-llama/llama-3.1-70b-instruct - Fast, open-source
- anthropic/claude-3.5-sonnet - Excellent reasoning
- openrouter/auto - Route to the best available model
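As a concrete sketch, an OpenRouter setup in `~/.banjin/config.yaml` could look like this (the keys follow the Configuration section below; the model ID is just an example):
```yaml
llm:
  baseUrl: "https://openrouter.ai/api/v1"
  model: "meta-llama/llama-3.1-70b-instruct"  # any provider/model ID from openrouter.ai
  apiKey: "YOUR_OPENROUTER_API_KEY"
```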
### Together.AI
Fast inference for open-source models.
Setup:
1. Get an API key from together.ai
2. Standard OpenAI-compatible format
### Local / Self-Hosted Models
Use with local LLM servers like vLLM, Ollama, LM Studio, etc.
```yaml
# Example: vLLM server running locally
baseUrl: "http://localhost:8000/v1"
model: "meta-llama/Llama-2-7b-chat-hf"
```
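If you need to stand up the local endpoint itself, recent vLLM releases ship an OpenAI-compatible server; a minimal sketch (model name and port are assumptions, adjust to your setup):
```bash
# Serve an OpenAI-compatible API on http://localhost:8000/v1
vllm serve meta-llama/Llama-2-7b-chat-hf --port 8000
```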
Banjin automatically detects your provider from baseUrl and applies provider-specific configuration:
- Groq → Standard OpenAI headers
- OpenRouter → Adds HTTP-Referer header (required) + X-Title header
- Together.AI → Standard OpenAI headers
- Generic → Standard OpenAI headers
No manual configuration needed! Just set your baseUrl and apiKey in config.yaml.
Edit ~/.banjin/config.yaml:
```yaml
llm:
  # Use any supported provider's base URL
  baseUrl: "https://api.groq.com/openai/v1"  # or openrouter.ai, together.ai, etc.
  # Model name (format depends on provider)
  model: "llama-3.1-8b-instant"
  # Your API key
  apiKey: "YOUR_API_KEY_HERE"
  # Temperature (0.0-2.0)
  temperature: 0.5
```
See config.example.yaml for more examples of different providers.
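Before launching Banjin, you can sanity-check the endpoint and key with plain curl; the /v1/models route is part of the OpenAI-compatible surface that providers such as Groq expose (Groq URL shown; substitute your own baseUrl, and note that $GROQ_API_KEY is just an assumed environment variable):
```bash
# List the models your key can access; an HTTP 401 means a bad key
curl -s https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY" | head -n 20
```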
---
Tool use (function calling) is required by Banjin. Below are models known to support tools:
Groq:
- llama-3.1-8b-instant ✅
- llama-3.3-70b-versatile ✅
OpenRouter:
- Most models support tools, but some free models may have limitations
- Check openrouter.ai/docs for model capabilities
Together.AI:
- Most Llama 3 / 3.1 models support tools
- Check provider docs for latest supported models
Note: If your chosen model doesn't support tool calling, Banjin will fail with an error message. Switch to a model with tool support.
---
The "Multi-Custom Provider" (MCP) system is what makes Banjin truly powerful. It allows you to define custom tools that the AI can use. A tool can be a simple local command or a call to a web service.
For example, you could configure an MCP tool to:
- Search your company's internal documentation.
- Fetch the status of your CI/CD pipeline.
- Create a new ticket in your project management system.
You define these tools in mcp-servers.json. The AI will then be able to see these tools and use them when appropriate to answer your requests.
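As a rough sketch, an entry in mcp-servers.json might look like the following. This follows the common MCP client convention of launching a server via command + args; Banjin's exact schema may differ, so treat every key and value here as hypothetical and check the template Banjin generates in ~/.banjin:
```jsonc
{
  "servers": {
    // "docs-search" and the script path are made-up examples
    "docs-search": {
      "command": "node",
      "args": ["./tools/docs-search-server.js"]
    }
  }
}
```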
Banjin supports slash commands (e.g., /help) for direct instructions. You can also use a dot prefix (e.g., .help) to prevent the command from being sent to the LLM.
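For example:
```bash
/help   # a direct slash command
.help   # same command; the dot prefix keeps it from being sent to the LLM
```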
Chat & Context:
/context - Display the current system context
/resetchat - Reset the current conversation memory
/savechat - Save the conversation to a file
/loadchat - Load a conversation from a file
/chats-list - List saved chat files
/chats-delete - Delete a saved chat file
LLM & Model:
/model - Change the model for this session
/temp <0.0-2.0> - Change the LLM temperature for this session
/model-reset - Reset model to the value from config file
/temp-reset - Reset temperature to the value from config file
Interface:
/mode - Switch the input mode
/output [markdown|text] [--save] - Show or set output format; use --save to persist to config
/output-reset - Reset output format to default from config
/timeout [seconds] [--save] - Show or set tool execution timeout (0=disabled); use --save to persist
/timeout-reset - Reset timeout to default from config
Connections & Files:
/status - Show current SSH connection status
/connect - Connect to a remote server via SSH
/disconnect - Disconnect from the remote server
/ls-files [path] - List files and directories
/list-ssh - List all saved SSH server aliases
/add-ssh - Add a new SSH server alias
/rm-ssh - Remove a saved SSH server alias
/upload - Upload a local file to the remote server
/download - Download a remote file to the local machine
MCP Tools:
/mcp-list - List available MCP servers from config
/mcp-tools - List all discovered tools from loaded MCP servers
/mcp-reload - Reload the MCP servers configuration
General:
/exec - Execute a shell command directly
/help - Show this help message
/clear - Clear the screen
/update - Check for application updates
Monitoring:
/watch - Repeatedly run a command at a set interval
/tail - Follow a log file in real time
Container Management:
/docker - Manage Docker containers (ps, logs, exec, start, stop, and more)
Database Operations:
/db-backup - Back up a MySQL, PostgreSQL, or MongoDB database
By default, Banjin displays responses as plain text for maximum compatibility.
- Session toggle:
  - Use /output markdown to enable Markdown rendering for the current session
  - Use /output text to switch back to plain text
  - Use /output-reset to reset to your config default
- Persist preference:
  - Use /output markdown --save (or --save with text) to write your preference to ~/.banjin/config.yaml
  - The setting is stored at cli.output_format and can be "text" (default) or "markdown"
When Markdown is enabled, Banjin uses the marked + marked-terminal stack to render headings, lists, code blocks, tables, and links more readably in your terminal. If the renderer packages are unavailable for any reason, Banjin will gracefully fall back to plain text.
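Putting the commands above together:
```bash
/output              # show the current output format
/output markdown     # render Markdown for this session only
/output text --save  # switch back to plain text and persist the choice
/output-reset        # return to the default from config.yaml
```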
Banjin provides safety features for long-running or stuck tool executions:
- Cancel with ESC: During tool execution, press the ESC key to cancel the operation immediately.
- Automatic timeout: Control how long tools can run before timing out:
- Default: 300 seconds (5 minutes) - reasonable for most server admin tasks
- Runtime control:
  - /timeout - Show the current timeout setting
  - /timeout 600 - Set the timeout to 10 minutes for the current session
  - /timeout 0 - Disable the timeout (infinite wait)
  - /timeout 300 --save - Set the timeout and save it to config permanently
  - /timeout-reset - Reset to the config default
- Config file: Set cli.tool_timeout in ~/.banjin/config.yaml
- Timeout is preserved across updates
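In config.yaml this corresponds to a single key (a sketch; the value is in seconds, matching /timeout):
```yaml
cli:
  tool_timeout: 300  # seconds; 0 disables the timeout entirely
```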
When a tool times out or is cancelled, Banjin will notify the LLM so it can adjust its approach or suggest alternatives.
Banjin includes powerful sysadmin commands for file management, monitoring, containers, and databases:
### File Transfer
```bash
# Upload a local file to the remote server
/upload ./config.yaml /etc/myapp/config.yaml
# Download a remote file to local
/download /var/log/nginx/error.log ./nginx-errors.log
```

### Real-time Monitoring
```bash
# Watch system processes every 5 seconds
/watch "ps aux | grep nginx" 5
# Monitor a log file in real time (shows last 20 lines, then follows)
/tail /var/log/nginx/access.log 20
# Watch a remote command
/watch "/exec systemctl status nginx" 10
```

### Container Management
```bash
# List all containers
/docker ps
# View container logs
/docker logs myapp
# Execute a command in a running container
/docker exec myapp "ls -la /app"
# Start/stop containers
/docker start nginx
/docker stop nginx
/docker restart nginx
# Remove a container
/docker rm old-container
# Pull and build images
/docker pull nginx:latest
/docker build . myapp:v1.0
```

### Database Backup
```bash
# MySQL backup
/db-backup mysql mydatabase root mypassword localhost
# PostgreSQL backup
/db-backup postgresql mydb postgres localhost
# MongoDB backup (creates a compressed archive)
/db-backup mongodb mydb localhost 27017
```

All commands work on both local and remote systems (when connected via SSH).
## Security Considerations

Important: These advanced commands have significant security implications. Always understand the risks before use.

### File Transfer (/upload, /download)
- ✅ Encrypted: Uses SCP over SSH for secure transfer
- ⚠️ Path Risks: Avoid relative paths that could overwrite system files
- 🛡️ Best Practice: Use absolute paths and verify destinations

```bash
# ✅ Safe usage
/upload ./config/app.yaml /home/user/config/app.yaml
/download /var/log/nginx/error.log ./server-logs.log

# ❌ Dangerous - avoid these patterns
/upload ../../../etc/passwd /tmp/backup   # Path traversal
/download /etc/shadow ./passwords         # Sensitive data
```

### Monitoring (/watch, /tail)
- ✅ Controlled: Manual refresh (Enter) and cancellation (Ctrl+C)
- ⚠️ Resource Usage: Continuous monitoring can consume system resources
- ⚠️ Data Exposure: Log monitoring may reveal sensitive information
- 🛡️ Best Practice: Use reasonable intervals and monitor resource usage

```bash
# ✅ Safe monitoring
/watch "ps aux | head -10" 5
/tail /var/log/nginx/access.log 50

# ❌ Resource intensive
/watch "find / -name '*.log' 2>/dev/null" 1
/tail /var/log/auth.log   # May expose authentication data
```

### Container Management (/docker)
- ✅ Isolated: Operations contained within the Docker environment
- ⚠️ Privilege Escalation: Containers with the --privileged flag bypass isolation
- ⚠️ Host Access: Mounted volumes can access the host filesystem
- 🛡️ Best Practice: Use non-root containers and verify image sources

```bash
# ✅ Safe operations
/docker ps
/docker logs myapp
/docker images

# ⚠️ High risk in privileged containers
/docker exec privileged-container "rm -rf /host/path"
```

### Database Backup (/db-backup)
- ✅ Encrypted Transfer: SSH encryption for remote backups
- ⚠️ Credential Exposure: Passwords are visible in command history
- ⚠️ Large Data Sets: Backups can consume significant disk space
- ⚠️ Sensitive Data: Backups contain potentially sensitive information
- 🛡️ Best Practice: Use interactive password prompts and verify storage space

```bash
# ✅ Safe backup (password prompted interactively)
/db-backup mysql mydb root localhost

# ❌ Avoid visible passwords
/db-backup mysql mydb root mysecretpassword localhost

# ✅ Check space before large backups
# Run: df -h (check available space)
# Run: ls -la ~/banjin-backups/ (check existing backups)
```

### Security Best Practices
1. Test First: Always test commands with non-critical data
2. Verify Permissions: Ensure proper access rights before operations
3. Monitor Resources: Watch system resources during long-running commands
4. Clean Up: Remove temporary files and old backups after use
5. Use Secure Connections: Always use SSH/VPN for remote access
6. Audit Actions: Review logs after sensitive operations

Risk Level (1-10 scale):
- /upload, /download: 3/10 (Low with proper validation)
- /watch: 4/10 (Depends on the monitored command)
- /tail: 5/10 (May expose sensitive logs)
- /docker: 6/10 (Depends on container privileges)
- /db-backup: 7/10 (Handles sensitive data)

## Development
If you want to contribute to the project:
```bash
# 1. Clone the repository
git clone https://github.com/octadira/banjin-cli.git
# 2. Navigate to the project directory
cd banjin-cli
# 3. Install dependencies
npm install
# 4. Run the app locally
npm start
```

## Server Profiling & Audit
Banjin includes comprehensive server profiling and audit logging capabilities for sysadmins and developers. All data is stored locally; nothing is automatically uploaded or shared.

### What Is Collected
Banjin collects detailed server information including:
- Hardware: CPU, cores, RAM, disk usage, network interfaces
- OS & Kernel: Full OS details, kernel version, uptime, load average
- Services: Running processes, systemd services, failed services
- Security: Firewall status, SSH configuration, failed login attempts
- Network: Listening ports, open connections, routing information
- Performance: Live CPU/memory usage, process counts, disk I/O
- Audit Trail: Complete log of all actions performed
Collection takes ~5-10 seconds and provides sysadmin-grade context for LLM analysis.
### Data Structures
ServerProfile (comprehensive server profile):
```typescript
{
id: 'server-01',
collectedAt: '2025-10-17T19:50:00Z',
hardware: { cpu, cores, ram_gb, disk_gb, disks },
os: { name, version, kernel, arch },
users: [ { username, uid, shell, home } ],
services: [ { name, status, port } ],
network: {
hostname,
public_ip,
interfaces: [ { name, ip, mac } ],
listening_ports: [ { port, protocol, service } ],
open_connections: number
},
security: {
firewall_status: string,
firewall_enabled: boolean,
ssh_port: number,
ssh_root_login: boolean,
failed_services: string[]
},
performance: {
cpu_usage_percent: number,
memory_usage_percent: number,
memory_used_gb: number,
load_average: { one, five, fifteen },
process_count: number
},
kernel_info: {
kernel_version: string,
boot_time: string
},
recent_alerts: {
error_count_1h: number,
failed_services: string[],
failed_login_count_1h: number
},
tags: ['production'],
notes: 'Main web server'
}
```

ActionLogEntry (audit log entry):
```typescript
{
timestamp: '2025-10-17T19:51:00Z',
user: 'adrian',
action: 'exec',
details: 'ssh server-01 "systemctl restart nginx"',
status: 'success',
error: undefined
}
```

### Commands

Profile Commands:
- /profile collect - Collect a comprehensive server profile with hardware, OS, services, security, and performance data
- /profile show [hostname] - Display a saved profile as JSON
- /profile summarize [hostname] - Brief summary with tags and notes
- /profile diff - Compare two profiles (stub)
- /profile send [--dry-run] - Send a profile to an external service (stub)

Audit Commands:
- /audit tail [--lines N] [--host hostname] - Show the last N audit entries
- /audit show [--host hostname] - Show all audit entries
- /audit export --format json|csv [--host hostname] - Export as JSON or CSV
- /audit search - Search the audit log (stub)

Storage Commands:
- /storage - Storage statistics and cleanup (stub)

### Example Workflow
```bash
# Connect to a remote server
/connect myserver
# Collect a comprehensive server profile (with security & performance analysis)
/profile collect
# View the collected data
/profile show
# Export the audit log
/audit export --format json
# Ask the LLM for analysis
"Analyze this server - suggest security improvements based on the full profile"
```

### Intelligent Tools
Banjin uses three intelligent tools for sysadmin recommendations:
1. save_profile_notes - Auto-save observations to the profile (no popup)
2. suggest_profile_update - Propose profile improvements with user confirmation (yes/no popup)
3. suggest_action_plan - Recommend step-by-step fixes with a risk assessment (popup with steps)

The LLM can use these tools to propose improvements, but all actions require your explicit approval before execution.
All data is stored locally under ~/.banjin. Nothing is sent to any server unless you explicitly implement it. Limits and policies can be configured in config.yaml (see cli.profile, cli.audit, cli.storage, cli.privacy).

---