A generic LLM connector for integrating Large Language Models (LLMs) in React ChatBotify!
### Quickstart

1. Install the plugin:
```bash
npm install @rcb-plugins/llm-connector
```
2. Import the plugin:
```javascript
import LlmConnector from "@rcb-plugins/llm-connector";
```
3. Initialize the plugin within the plugins prop of ChatBot:
```javascript
import ChatBot from "react-chatbotify";
import LlmConnector from "@rcb-plugins/llm-connector";

const MyComponent = () => {
  return (
    <ChatBot plugins={[LlmConnector()]}/>
  );
};
```
4. Define an llmConnector attribute within the Block that requires LLM integration. Import your desired LLM provider (or create your own!) and pass it as the value of the provider property within the llmConnector attribute. You may refer to the setup below, which uses the WebLlmProvider, for a better idea (details covered later):
```javascript
import ChatBot from "react-chatbotify";
import LlmConnector, { LlmConnectorBlock, WebLlmProvider } from "@rcb-plugins/llm-connector";

const MyComponent = () => {
  const flow = {
    start: {
      message: "What would you like to find out today?",
      transition: 0,
      path: "llm_example_block",
    },
    llm_example_block: {
      llmConnector: {
        provider: new WebLlmProvider({
          model: 'Qwen2-0.5B-Instruct-q4f16_1-MLC',
        }),
      }
    } as LlmConnectorBlock,
    // ... other blocks as necessary
  }
  return (
    <ChatBot plugins={[LlmConnector()]} flow={flow}/>
  );
};
```
The quickstart above shows how LLM integrations can be done within the llm_example_block, where we rely on a default WebLlmProvider with minimal configuration to perform inference in the browser. The full configuration guide for the default providers can be found here. For those who prefer more hands-on experimentation, the documentation website for the React ChatBotify Core Library also contains live examples for this plugin, which you will find under the LLM Providers section.
### Features
LLM Connector is a lightweight plugin that provides the following features to your chatbot:
- Simple & Fast LLM Integrations (via common default providers)
- Configure output behavior (e.g. stream responses by character/chunk or show full text at once)
- Configure output speed
- Configure size of message history to include
- Configure default error messages if responses fail
- Synchronized audio output (relies on core library audio configurations to read out LLM responses; see the sketch after this list)
- Built-in common providers for easy integrations (OpenAI, Gemini & WebLlm)
- Ease of building your own providers for niche or custom use cases
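As a quick illustration of the synchronized audio feature, the sketch below enables the core library's audio settings so that LLM responses are read aloud. The settings.audio flags come from the React ChatBotify Core Library and should be verified against its documentation:
```javascript
import ChatBot from "react-chatbotify";
import LlmConnector from "@rcb-plugins/llm-connector";

const MyComponent = () => {
  return (
    <ChatBot
      plugins={[LlmConnector()]}
      settings={{
        // enable audio in the core library so that LLM responses are read out;
        // these flags are from the core library (verify against its documentation)
        audio: { disabled: false, defaultToggledOn: true },
      }}
    />
  );
};
```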
### API Documentation
#### Plugin Configuration
The LlmConnector plugin accepts a configuration object that allows you to customize its behavior and appearance. An example configuration is passed in below to initialize the plugin:
```javascript
import ChatBot from "react-chatbotify";
import LlmConnector from "@rcb-plugins/llm-connector";

const MyComponent = () => {
  const pluginConfig = {
    // defaults to true, auto enable events required for plugin to work
    autoConfig: true,
  }
  return (
    <ChatBot plugins={[LlmConnector(pluginConfig)]}/>
  )
}
```
The base plugin configuration accepts only a single field, autoConfig (strongly recommended to keep as true), which is described below:
| Configuration Option | Type | Default Value | Description |
|----------------------|------|---------------|-------------|
| autoConfig | boolean | true | Enables automatic configuration of required events. Recommended to keep as true. If set to false, you need to configure the required events manually (a sketch follows below). |
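If you do set autoConfig to false, the events the plugin relies on must be enabled manually through the core library's settings. The sketch below assumes events are toggled via the settings.event section of the core library; the event flags shown are illustrative placeholders, so check the plugin documentation for the exact set required:
```javascript
import ChatBot from "react-chatbotify";
import LlmConnector from "@rcb-plugins/llm-connector";

const MyComponent = () => {
  return (
    <ChatBot
      plugins={[LlmConnector({ autoConfig: false })]}
      settings={{
        // illustrative placeholders: enable the events the plugin listens to;
        // consult the plugin documentation for the definitive list
        event: {
          rcbUserSubmitText: true,
          rcbPreInjectMessage: true,
        },
      }}
    />
  );
};
```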
#### LLM Connector Attribute
The llmConnector attribute is added to the Block that you wish to integrate an LLM into. When you specify the llmConnector attribute, all default attributes specified in the block are ignored, because the LlmConnector plugin takes full control over the block to ensure a tight and smooth integration. With that said, the llmConnector attribute is an object that comes with its own properties, described below:
| Property | Type | Default Value | Description |
|----------|------|---------------|-------------|
| provider | Provider | null | The LLM Provider to use in this block. |
| outputType | string | chunk | Output type for the LLM response (character, chunk or full). If set to character or chunk, output will be streamed by character or chunk respectively. If set to full, the output will be sent fully in one go. |
| outputSpeed | number | 30 | Output speed in milliseconds (applicable only if outputType is set to character or chunk). |
| historySize | number | 0 | Default number of messages from chat history to include when sending messages to LLMs. |
| initialMessage | string | "" | Initial message to send in the chat. |
| waitForUserInput | boolean | true | Whether to wait for user input before triggering the initial LLM prompt (if false, uses the user input from the previous block). |
| errorMessage | string | Unable to get response, please try again. | Error message shown on failure to fetch a response. |
| stopConditions | object | null | An object containing possible stop conditions to end an LLM conversation (more information on stopConditions here). |
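To make these properties concrete, the sketch below configures a block with most of them set explicitly. The values chosen are purely illustrative, not recommendations:
```javascript
import { LlmConnectorBlock, WebLlmProvider } from "@rcb-plugins/llm-connector";

const llm_example_block = {
  llmConnector: {
    provider: new WebLlmProvider({
      model: 'Qwen2-0.5B-Instruct-q4f16_1-MLC',
    }),
    outputType: 'character',   // stream the response character by character
    outputSpeed: 50,           // 50 milliseconds between characters
    historySize: 5,            // include the last 5 messages from chat history
    initialMessage: "Ask me anything!",
    waitForUserInput: true,    // wait for user input before prompting the LLM
    errorMessage: "Something went wrong, please try again.",
  },
} as LlmConnectorBlock;
```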
#### LLM Providers
As you may have seen from earlier examples, providers are passed into the provider property within the llmConnector attribute. Providers are essentially an abstraction over the various LLM providers such as OpenAI and Gemini. With that said, configurations for providers can vary greatly depending on the choice of provider. For the default providers, their configuration guides can be found here:
- OpenAIProvider Configurations
- GeminiProvider Configurations
- WebLlmProvider Configurations
> [!TIP]
> Note that if your choice of provider falls outside the default ones provided but has API specifications aligned to default providers (e.g. OpenAI), you may still use the default providers (a sketch of this follows below).
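As a minimal sketch of this idea, the example below points the OpenaiProvider at a self-hosted, OpenAI-compatible endpoint. It assumes the provider exposes a baseUrl option and that the endpoint implements the OpenAI API specification; verify both against the OpenaiProvider configuration guide:
```javascript
import { OpenaiProvider } from "@rcb-plugins/llm-connector";

// assumption: baseUrl is a supported OpenaiProvider option, and the endpoint
// below (hypothetical) implements the OpenAI chat completions specification
const provider = new OpenaiProvider({
  mode: 'direct',
  model: 'my-self-hosted-model',           // hypothetical model name
  baseUrl: 'https://llm.example.com/v1',   // hypothetical OpenAI-compatible endpoint
  apiKey: '<your-api-key>',
});
```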
In addition, React ChatBotify's documentation website also contains live examples covering all of these default providers. You are strongly encouraged to reference these examples:
- OpenAI Provider Live Example
- Gemini Provider Live Example
- WebLlm Live Example
Developers may also write custom providers to integrate with their own solutions by importing and implementing the Provider interface. The only method enforced by the interface is sendMessages, which returns an AsyncGenerator of response chunks for the LlmConnector plugin to consume. A minimal example of a custom provider is shown below:
```javascript
import { Message } from "react-chatbotify";
import { Provider } from "@rcb-plugins/llm-connector";

class MyCustomProvider implements Provider {
  /**
   * Streams a response by yielding each chunk (or the full text at once).
   *
   * @param messages messages to include in the request
   */
  public async *sendMessages(messages: Message[]): AsyncGenerator<string> {
    // a real provider would do something with the messages (e.g. call a proxy),
    // but this example simply yields a fixed string
    yield "Hello World!";
  }
}
```
> [!TIP]
> Consider referencing the implementations for the default providers here if you're looking to create your own provider.
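For a slightly fuller (but still hypothetical) sketch, the provider below streams response chunks from a self-hosted proxy endpoint. The endpoint URL and its plain-text streaming response are assumptions made for illustration; adapt them to your own backend:
```javascript
import { Message } from "react-chatbotify";
import { Provider } from "@rcb-plugins/llm-connector";

class ProxyProvider implements Provider {
  public async *sendMessages(messages: Message[]): AsyncGenerator<string> {
    // hypothetical proxy endpoint that accepts the chat history and
    // streams back plain-text chunks
    const response = await fetch("https://llm-proxy.example.com/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
    });
    if (!response.ok || !response.body) {
      throw new Error(`Request failed with status ${response.status}`);
    }
    // read the response body as a stream and yield decoded chunks
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      yield decoder.decode(value, { stream: true });
    }
  }
}
```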
#### Ending LLM Conversations
Within the llmConnector attribute, there is a stopConditions property that accepts an object containing several types of stop conditions, which developers may use to end LLM conversations. In the example below, llm_example_block uses both the onUserMessage stop condition to check if the user sent a "FINISH" message, and the onKeyDown stop condition to check if the "Escape" key is pressed. If either condition is satisfied, the user is sent to the exit_block:
```javascript
import ChatBot, { Message } from "react-chatbotify";
import LlmConnector, { LlmConnectorBlock, OpenaiProvider } from "@rcb-plugins/llm-connector";

const MyComponent = () => {
  const flow = {
    start: {
      message: "What would you like to find out today?",
      transition: 0,
      path: "llm_example_block",
    },
    llm_example_block: {
      llmConnector: {
        provider: new OpenaiProvider({
          mode: 'direct',
          model: 'gpt-4.1-nano',
          responseFormat: 'stream',
          apiKey: '<your-openai-api-key>',
        }),
        stopConditions: {
          onUserMessage: (message: Message) => {
            if (
              typeof message.content === 'string' &&
              message.content.toUpperCase() === 'FINISH'
            ) {
              return 'exit_block';
            }
            return null;
          },
          onKeyDown: (event: KeyboardEvent) => {
            if (event.key === 'Escape') {
              return 'exit_block';
            }
            return null;
          },
        },
      },
    } as LlmConnectorBlock,
    exit_block: {
      message: "The LLM conversation has ended!",
      chatDisabled: true,
      options: ["Try Again"],
      path: "llm_example_block",
    },
    // ... other blocks as necessary
  };
  return (
    <ChatBot plugins={[LlmConnector()]} flow={flow}/>
  )
}
```
Currently, the plugin offers two stop conditions, onUserMessage and onKeyDown:
| Stop Condition | Type | Default Value | Description |
|----------------|------|---------------|-------------|
| onUserMessage | async function | null | This stop condition is triggered whenever a new message is sent by the user within the LlmConnectorBlock. It takes in a Message parameter representing the message that was sent and returns a string representing a path to go to, or null to remain within the block. |
| onKeyDown | async function | null | This stop condition is triggered whenever a key down event is recorded (listens for keydown events). It takes in a KeyboardEvent parameter and returns a string representing a path to go to, or null to remain within the block. |
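Since both stop conditions may be async functions, you can await asynchronous checks before deciding whether to leave the block. The sketch below assumes a hypothetical backend endpoint that decides when the conversation should end:
```javascript
import { Message } from "react-chatbotify";

// hypothetical helper that asks your own backend whether to end the conversation
const isConversationOver = async (content: string): Promise<boolean> => {
  const response = await fetch("https://api.example.com/should-end", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ content }),
  });
  return (await response.json()).shouldEnd;
};

const stopConditions = {
  onUserMessage: async (message: Message) => {
    if (typeof message.content === 'string' && await isConversationOver(message.content)) {
      return 'exit_block';
    }
    return null;
  },
};
```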