# assistino

A TypeScript wrapper for the OpenAI Assistants API with multi-user support and cost optimization.

## Installation

```bash
npm install assistino
```

## Usage

```typescript
import Assistant from 'assistino';

const apiKey = 'YOUR_API_KEY_HERE';
const assistantId = 'YOUR_ASSISTANT_ID_HERE';

const assist = new Assistant(apiKey, assistantId, 'gpt-3.5-turbo', {
  maxMessagesPerThread: 20,
  inactivityTimeout: 60 * 60 * 1000 // 1 hour
});

async function chatExample() {
  try {
    const userId = 'user123';
    const response = await assist.chat("Hello, how are you?", userId);
    console.log("Assistant's response:", response);

    // Cleanup inactive threads (call this periodically)
    // await assist.cleanupInactiveThreads();
  } catch (error) {
    console.error("An error occurred:", error);
  }
}

chatExample();
```
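Because `chat` takes a `userId`, several users can talk to the same assistant side by side. The sketch below assumes the wrapper keeps a separate conversation thread per `userId` and that concurrent calls are safe; the user IDs are placeholders.

```typescript
import Assistant from 'assistino';

const assist = new Assistant('YOUR_API_KEY_HERE', 'YOUR_ASSISTANT_ID_HERE');

async function multiUserExample() {
  // Each userId is assumed to get its own conversation thread inside the wrapper
  const [aliceReply, bobReply] = await Promise.all([
    assist.chat('What can you help me with?', 'alice'),
    assist.chat('Continue our previous conversation.', 'bob'),
  ]);

  console.log('alice:', aliceReply);
  console.log('bob:', bobReply);
}

multiUserExample().catch(console.error);
```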
## Configuration

When creating an instance of the Assistant class, you can provide the following options:
- `apiKey`: Your OpenAI API key
- `assistantId`: The ID of your OpenAI Assistant
- `model` (optional): The GPT model to use (default: 'gpt-3.5-turbo')
- `options` (optional): An object with the following properties:
  - `maxMessagesPerThread`: Maximum number of messages to retain per thread
  - `inactivityTimeout`: Time in milliseconds after which an inactive thread is cleaned up
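For example, you might load the key and assistant ID from environment variables instead of hard-coding them. This is only a sketch: `OPENAI_API_KEY` and `ASSISTANT_ID` are placeholder variable names chosen here, not something the library requires.

```typescript
import Assistant from 'assistino';

// OPENAI_API_KEY and ASSISTANT_ID are placeholder names, not required by the library
const apiKey = process.env.OPENAI_API_KEY ?? '';
const assistantId = process.env.ASSISTANT_ID ?? '';

const assist = new Assistant(apiKey, assistantId, 'gpt-4', {
  maxMessagesPerThread: 50,          // keep more context per user (higher token cost)
  inactivityTimeout: 30 * 60 * 1000  // drop threads idle for 30 minutes
});
```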
## API

- `chat(message, userId)`: Send a message and get a response
- `removeUserThread(userId)`: Manually remove a user's conversation thread (see the sketch below)
- `cleanupInactiveThreads()`: Remove all inactive threads
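For instance, you might remove a user's thread when they log out so their conversation state is discarded immediately. The logout handler below is hypothetical; only the `removeUserThread` call comes from assistino.

```typescript
import Assistant from 'assistino';

const assist = new Assistant('YOUR_API_KEY_HERE', 'YOUR_ASSISTANT_ID_HERE');

// Hypothetical logout handler; only removeUserThread() comes from assistino
async function onUserLogout(userId: string) {
  await assist.removeUserThread(userId);
  console.log(`Discarded conversation thread for ${userId}`);
}
```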
## Tips

- Call `cleanupInactiveThreads()` periodically to manage long-running applications (see the sketch below)
- Adjust `maxMessagesPerThread` and `inactivityTimeout` based on your specific use case and budget constraints
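One way to schedule the cleanup is a simple timer, sketched below. The 15-minute interval is an arbitrary choice, and `cleanupInactiveThreads` is assumed to be safe to call repeatedly.

```typescript
import Assistant from 'assistino';

const assist = new Assistant('YOUR_API_KEY_HERE', 'YOUR_ASSISTANT_ID_HERE', 'gpt-3.5-turbo', {
  maxMessagesPerThread: 20,
  inactivityTimeout: 60 * 60 * 1000 // threads idle for an hour become eligible for cleanup
});

// Sweep for inactive threads every 15 minutes (arbitrary interval)
setInterval(async () => {
  try {
    await assist.cleanupInactiveThreads();
  } catch (error) {
    console.error('Thread cleanup failed:', error);
  }
}, 15 * 60 * 1000);
```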
## Contributing

Contributions are welcome!