Node-RED plugin for developing with LLMs using Ollama
Installation
Run the following command in your Node-RED user directory (typically `~/.node-red`):

```bash
npm install @background404/node-red-contrib-llm-plugin
```
Restart Node-RED to load the plugin.
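How you restart depends on how Node-RED was installed; as a sketch, a systemd-based install (such as the one created by the official Raspberry Pi install script) can be restarted as shown below, while a plain local install can simply be stopped (Ctrl+C) and launched again:

```bash
# Restart the Node-RED service on a systemd-based install
sudo systemctl restart nodered

# Or, for a plain local install, stop the process and relaunch it
node-red
```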
Configuration
Open the LLM Plugin sidebar tab and click the Settings button (gear icon).
Ollama Setup
1. Ensure Ollama is installed and running (`ollama serve`).
- To allow access from other devices, set `OLLAMA_HOST=0.0.0.0` before starting Ollama (see the sketch after this list).
2. Select Ollama as the provider.
3. Enter the Ollama URL (default: `http://localhost:11434`).
4. Set the model name directly in the chat interface (e.g., `llama3`, `mistral`).
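A quick way to verify the local setup before chatting (a sketch; assumes the default port and that `curl` is available):

```bash
# Expose Ollama to other devices (optional; by default it binds to localhost)
OLLAMA_HOST=0.0.0.0 ollama serve

# Pull a model so it can be selected in the chat interface
ollama pull llama3

# Confirm the API is reachable and list the installed models
curl http://localhost:11434/api/tags
```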
OpenAI Setup
1. Select OpenAI as the provider.
2. Enter your API key (you can sanity-check it with the snippet after this list).
- Note: Your API key is stored in Node-RED's internal configuration (typically ~/.node-red/.config.runtime.json) and is NOT included when exporting flows.
3. Set the model name directly in the chat interface (e.g., `gpt-4o`, `gpt-4-turbo`).
- Warning: OpenAI API usage incurs costs. Sending large flows as context can consume significant tokens.
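An optional way to confirm the key is valid before entering it in the plugin (a sketch; assumes `curl` and that the key is exported as `OPENAI_API_KEY`):

```bash
# List the models the key can access; a 401 response means the key is invalid
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```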
Usage
1. Open the Sidebar: Select "LLM Plugin" from the sidebar dropdown.
2. Chat: Type your question or request.
3. Context Awareness: The plugin automatically includes a summary of your currently active flow to help the LLM understand your context.
4. Importing: If the LLM generates a flow (in a JSON code block), an "Import Flow" button will appear. Click it to add the nodes to your current workspace.
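For illustration, a minimal example of the kind of JSON code block the LLM might return (node ids, names, and coordinates here are hypothetical; real responses will differ):

```json
[
  {
    "id": "inject1",
    "type": "inject",
    "name": "tick",
    "x": 140,
    "y": 80,
    "wires": [["debug1"]]
  },
  {
    "id": "debug1",
    "type": "debug",
    "name": "log payload",
    "x": 340,
    "y": 80,
    "wires": []
  }
]
```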
Notes & Limitations
- Development Status: This plugin is currently in development.
- Model Behavior: Responses may vary depending on the LLM model used. Occasionally, the generated flow might be incomplete or incorrect.
- Import Safety: The importer automatically strips out `tab` nodes to prevent creating unnamed tabs. If an import fails, try asking the LLM to regenerate the JSON.