# q-ollama

A Node.js package for interacting with Ollama and Baichuan AI models, with flexible API and CLI support.
## Installation

```bash
npm install q-ollama
```
## Quick Start

### Using Ollama
Make sure the Ollama service is running (default: http://localhost:11434).
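If you're not sure it's up, one quick check (a sketch assuming a default local install) is to hit the root endpoint, which should answer with a short status line:

```bash
# Assumes a default local Ollama install on port 11434.
curl http://localhost:11434
# Should print something like: Ollama is running
```

With the service running: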
```javascript
const { QOllama, ProviderType } = require('q-ollama');
// Create instance
const qollama = new QOllama({
provider: ProviderType.OLLAMA,
ollamaBaseUrl: 'http://localhost:11434',
defaultModel: 'qwen3:8b',
debug: true
});
// Quick chat
async function chat() {
const response = await qollama.quickChat('Hello, please introduce yourself');
console.log('AI Response:', response.content);
}
chat();
```

### Using Baichuan

Set the BAICHUAN_API_KEY environment variable, or provide the API key directly:

```javascript
const { QOllama, ProviderType } = require('q-ollama');
const qollama = new QOllama({
provider: ProviderType.BAICHUAN,
baichuanApiKey: 'your-api-key-here', // or use environment variable
defaultModel: 'Baichuan2-Turbo'
});
async function chat() {
const response = await qollama.quickChat('Hello');
console.log('Baichuan Response:', response.content);
}
chat();
```

### Switching Providers

```javascript
// Start with Ollama
const qollama = new QOllama({
provider: ProviderType.OLLAMA,
defaultModel: 'qwen3:8b'
});
console.log('Current provider:', qollama.getCurrentProvider()); // ollama
// Switch to Baichuan
qollama.switchProvider({
provider: ProviderType.BAICHUAN,
baichuanApiKey: process.env.BAICHUAN_API_KEY
});
console.log('Switched provider:', qollama.getCurrentProvider()); // baichuan
```

## API Reference

### QOllama Class

#### Constructor

```typescript
new QOllama(config: QOllamaConfig)
```

Configuration options:

```typescript
interface QOllamaConfig {
provider: ProviderType; // Model provider
ollamaBaseUrl?: string; // Ollama service URL
baichuanApiKey?: string; // Baichuan API key
defaultModel?: string; // Default model
debug?: boolean; // Debug mode
}
```
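Only `provider` is required; the optional fields presumably fall back to library defaults (for Ollama, the default URL noted above). A minimal sketch:

```javascript
const { QOllama, ProviderType } = require('q-ollama');

// Minimal configuration: only `provider` is required by the interface above.
// Base URL and default model are left to whatever defaults the library uses.
const qollama = new QOllama({ provider: ProviderType.OLLAMA });
```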
#### Methods
- `chat(messages: ChatMessage[], options?: ChatOptions): Promise` - Send chat messages (see the sketch below)
- `quickChat(prompt: string, options?: ChatOptions): Promise` - Send a quick single message
- `switchProvider(newConfig: QOllamaConfig): void` - Switch the model provider
- `getCurrentProvider(): string` - Get the current provider
- `supportsStreaming(): boolean` - Check whether streaming is supported
- `listModels(): Promise` - List available models
- `setDebug(debug: boolean): void` - Set debug mode
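A hedged sketch of a multi-turn call with `chat()`. The message shape below (`role`/`content` objects, the common convention for chat APIs) is an assumption, since the `ChatMessage` type isn't spelled out here:

```javascript
const { QOllama, ProviderType } = require('q-ollama');

const qollama = new QOllama({
  provider: ProviderType.OLLAMA,
  defaultModel: 'qwen3:8b'
});

async function multiTurn() {
  // Assumed ChatMessage shape: { role, content } pairs.
  const messages = [
    { role: 'system', content: 'You are a concise assistant.' },
    { role: 'user', content: 'Summarize what Ollama does in one sentence.' }
  ];
  const response = await qollama.chat(messages);
  console.log('AI Response:', response.content);

  // listModels() resolves with the models the current provider exposes.
  const models = await qollama.listModels();
  console.log('Available models:', models);
}

multiTurn();
```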
### Factory Functions

```javascript
const { createQOllama, createOllamaProvider, createBaichuanProvider } = require('q-ollama');
// Quick instance creation
const qollama1 = createQOllama(config);
const qollama2 = createOllamaProvider('http://localhost:11434', true);
const qollama3 = createBaichuanProvider('your-api-key', true);
```

## Command Line Tool

After installation, use the `q-ollama` command:

### Interactive Chat

```bash
# Using Ollama
q-ollama chat --provider ollama --model qwen3:8b
# Using Baichuan
q-ollama chat --provider baichuan --model Baichuan2-Turbo --key YOUR_API_KEY
```

### Single Message

```bash
q-ollama message "Hello world" --provider ollama --model qwen3:8b
```
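The provider, model, and key flags shown for chat presumably apply here as well; for example, a one-shot Baichuan message (assuming the same flag set):

```bash
q-ollama message "Hello world" --provider baichuan --model Baichuan2-Turbo --key YOUR_API_KEY
```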
### List Models

```bash
q-ollama list-models --provider ollama
```

### Help

```bash
q-ollama --help
```

## Debug Mode

Enable debug mode to see detailed request and response information:

```javascript
const qollama = new QOllama({
provider: ProviderType.OLLAMA,
debug: true // Enable debug
});
// Or enable at runtime
qollama.setDebug(true);
```
Debug output includes:
- Method call parameters
- API request details
- Response data
- Error information
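Since calls can fail at runtime (service not reachable, invalid API key), it's worth wrapping them in try/catch; with debug enabled, the error details above are also logged. A minimal sketch:

```javascript
// `qollama` is any configured QOllama instance.
async function safeChat(qollama, prompt) {
  try {
    const response = await qollama.quickChat(prompt);
    return response.content;
  } catch (err) {
    // With debug: true, q-ollama also logs the error details described above.
    console.error('Chat failed:', err.message);
    return null;
  }
}
```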
## Environment Variables

- `BAICHUAN_API_KEY` - Baichuan model API key
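For example, in a POSIX shell (the key value is a placeholder):

```bash
export BAICHUAN_API_KEY=your-api-key-here
```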
## Development

### Build

```bash
npm run build
```

### Test

```bash
npm test
```

### Development Mode

```bash
npm run dev
```

## Examples

Check the `examples/` directory for complete examples:

```bash
node examples/basic-usage.js
``