# n8n-nodes-token-tracker

N8N node for AI token tracking and monitoring with sub-workflow execution capabilities.

```bash
npm install n8n-nodes-token-tracker
```

A powerful N8N custom node that provides comprehensive AI token usage tracking and monitoring with sub-workflow execution support. This middleware node sits between your workflow and AI language models to provide detailed insight into token consumption, costs, and usage patterns.
## 📦 Installation

```bash
npm install @custom/n8n-nodes-ai-token-tracking
```

### Manual Installation
1. Clone this repository
2. Run npm install to install dependencies
3. Run npm run build to compile the TypeScript code
4. Install the package in your N8N instance

## 🏗️ Architecture
```
Input Data → AI Token Tracking Node → AI Model → Output Data + Tracking Metadata
                          ↓
               Sub-Workflow (Optional)
```

## 🔧 Usage

### Basic Setup
1. Add the Node: Drag the "AI Token Tracking" node into your workflow
2. Connect Inputs:
- Connect your data to the "Main Input"
- Connect an AI Language Model to the "AI Model" input
3. Configure Tracking: Set up token tracking preferences
4. Connect Outputs: Use both outputs: the main data flow and the AI model passthrough
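Conceptually, the node acts as middleware wrapped around the model call: the request passes through, usage is measured, and metadata is attached to the result. The sketch below is a simplified standalone illustration of that idea — the `ModelCall` type, the `withTokenTracking` helper, and the whitespace-based token counting are assumptions for demonstration, not the node's actual implementation:

```typescript
// Hypothetical usage shape, mirroring the fields shown under "Output Data".
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
}

// A model call: prompt in, reply text plus usage out (illustrative signature).
type ModelCall = (prompt: string) => { text: string; usage: TokenUsage };

// Wraps a model call and attaches tracking metadata to its result.
function withTokenTracking(callModel: ModelCall, sessionId: string) {
  return (prompt: string) => {
    const start = Date.now();
    const { text, usage } = callModel(prompt);
    return {
      originalData: text,
      _aiTokenTracking: {
        sessionId,
        trackingEnabled: true,
        timestamp: new Date(start).toISOString(),
        usage,
      },
    };
  };
}

// Fake model for demonstration: "counts" tokens as whitespace-separated words.
const fakeModel: ModelCall = (prompt) => {
  const inputTokens = prompt.split(/\s+/).length;
  const outputTokens = 1;
  return {
    text: "ok",
    usage: { inputTokens, outputTokens, totalTokens: inputTokens + outputTokens },
  };
};

const tracked = withTokenTracking(fakeModel, "workflow-node-123456-abc");
const result = tracked("hello token tracker");
```

The original call result passes through unchanged as `originalData`, so downstream nodes keep working whether or not they read the tracking metadata.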
### Configuration

```typescript
// Tracking Configuration
{
  enableInputTokens: true,
  enableOutputTokens: true
}

// Sub-Workflow Configuration
{
  enabled: true,
  workflowId: "workflow-123",
  trigger: "always", // "always" | "threshold" | "interval"
  mode: "once",      // "once" | "each"
  waitForCompletion: true
}
```

## 📊 Output Data
The node adds comprehensive tracking metadata:
```json
{
  "originalData": "...",
  "_aiTokenTracking": {
    "sessionId": "workflow-node-123456-abc",
    "trackingEnabled": true,
    "timestamp": "2025-08-20T10:47:00.000Z",
    "usage": {
      "inputTokens": 150,
      "outputTokens": 75,
      "totalTokens": 225,
      "estimatedCost": 0.000375,
      "modelName": "gpt-4"
    }
  }
}
```
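For illustration, an `estimatedCost` value like the one above can be derived by weighting input and output token counts against a per-1K-token rate table. The rates and names below are hypothetical examples, not the node's actual pricing data:

```typescript
// Hypothetical per-1K-token rates; real model prices vary and change over time.
interface Rates {
  inputPer1K: number;
  outputPer1K: number;
}

const EXAMPLE_RATES: Rates = { inputPer1K: 0.03, outputPer1K: 0.06 };

// Cost estimate: each token bucket scaled by its per-1K rate.
function estimateCost(inputTokens: number, outputTokens: number, rates: Rates): number {
  return (inputTokens / 1000) * rates.inputPer1K + (outputTokens / 1000) * rates.outputPer1K;
}

// 150 input + 75 output tokens at the example rates above.
const cost = estimateCost(150, 75, EXAMPLE_RATES);
```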
## 🔬 Testing
```bash
npm test               # Run all tests
npm run test:watch     # Run tests in watch mode
npm run test:coverage  # Run tests with coverage
```

## 🛠️ Development
```bash
npm install    # Install dependencies
npm run dev    # Development mode with auto-reload
npm run build  # Build for production
npm run lint   # Run linting
```

## ⚡ Performance

- Overhead: < 50 ms per AI model call
- Memory: Efficient with configurable history limits
- Scalability: Handles concurrent executions
- Reliability: Robust error handling
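The "configurable history limits" point can be sketched as a bounded buffer that evicts the oldest usage entries once the limit is reached, keeping memory flat in long-running workflows. The `UsageHistory` class and its limit are illustrative assumptions, not the node's real data structure:

```typescript
// Bounded usage-history buffer: retains at most `limit` recent entries.
class UsageHistory<T> {
  private entries: T[] = [];

  constructor(private readonly limit: number) {}

  add(entry: T): void {
    this.entries.push(entry);
    if (this.entries.length > this.limit) {
      this.entries.shift(); // evict the oldest entry
    }
  }

  get size(): number {
    return this.entries.length;
  }

  latest(): T | undefined {
    return this.entries[this.entries.length - 1];
  }
}

// With a limit of 3, adding four entries drops the first one.
const history = new UsageHistory<number>(3);
[10, 20, 30, 40].forEach((n) => history.add(n));
```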
## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Submit a pull request
## 📄 License

MIT License - see LICENSE.md for details.
## 💬 Support

- GitHub Issues
- Documentation
- Stack Overflow
---
Made with ❤️ for the N8N community