# n8n-nodes-dify-chat-model

N8N custom node for the Dify Chat Model API with cache support. Compatible with the N8N AI Agent.
## 📦 Installation

```bash
npm install n8n-nodes-dify-chat-model
```

Then restart your N8N instance.
## 🚀 Usage
### Set Up Credentials
1. Add the Dify Chat Model node to your workflow
2. Click **Credential to connect with**
3. Click Create New Credential
4. Select Dify API
5. Fill in:
- API Key: Your Dify API key
- Base URL: https://api.dify.ai (or your self-hosted URL)
6. Click Save
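To confirm the credentials work before wiring up the node, you can call Dify's `chat-messages` endpoint directly. This is a sketch assuming the standard Dify public API; the key and base URL are placeholders you must replace with your own:

```bash
# Hypothetical smoke test for your Dify credentials.
DIFY_API_KEY="app-xxxxxxxxxxxx"        # placeholder - use your real Dify API key
DIFY_BASE_URL="https://api.dify.ai"    # or your self-hosted URL

curl -s -X POST "$DIFY_BASE_URL/v1/chat-messages" \
  -H "Authorization: Bearer $DIFY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {}, "query": "ping", "response_mode": "blocking", "user": "n8n-user"}'
```

A successful response contains an `answer` field; an authorization error means the API key or base URL needs fixing.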
### Node Parameters
- App ID: Your Dify Application ID (required)
- User ID: User identifier (optional, default: "n8n-user")
- Conversation ID: To continue existing conversations (optional)
- Response Mode: "Blocking" (recommended for AI Agent)
- Input Variables: Additional variables as JSON object (optional)
- Query: Automatically populated from the AI Agent's output
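These parameters correspond to fields in Dify's `chat-messages` request body. A hypothetical payload (field names follow Dify's public API; the exact mapping inside this node may differ):

```json
{
  "inputs": { "customer_name": "Alice" },
  "query": "What did I order last week?",
  "response_mode": "blocking",
  "conversation_id": "",
  "user": "n8n-user"
}
```

Here `inputs` carries the Input Variables JSON object, and an empty `conversation_id` starts a new conversation, while passing an existing ID continues it.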
### Connect to the AI Agent
Connect this node to the Chat Model input of your AI Agent node.
## 💡 Why Use This Node?
- 💰 Cost Savings: Dify's cache significantly reduces token usage compared to direct LLM calls
- 🔌 Compatibility: Works seamlessly with N8N's AI Agent
- 🌐 Flexibility: Supports both cloud and self-hosted Dify instances
- ⚡ Performance: Faster responses with cached results
## 📋 Requirements
- N8N version 1.0.0 or higher
- Dify API access (cloud or self-hosted)
- Dify Application ID
## 🔧 Development
```bash
# Install dependencies
npm install

# Build the project
npm run build

# Watch mode for development
npm run dev

# Lint
npm run lint

# Format code
npm run format
```