# pino-coralogix

A Pino transport for sending logs to Coralogix.
- Worker Thread Support: Runs in a separate thread via Pino's transport option (recommended)
- TDD Approach: Built using Test-Driven Development with 54 tests
- Efficient Batching: Automatically batches logs to minimize network calls
- Auto-flush: Configurable batch size and time-based flushing
- Type Mapping: Automatic mapping of Pino log levels to Coralogix severity
- Size Awareness: Respects Coralogix's 2MB limit with 80% threshold detection
- Multi-region: Supports all Coralogix domains (US, EU, AP)
- Native HTTP: Uses undici for fast, modern HTTP requests
- Well Tested: Comprehensive unit and integration tests
## Installation

```bash
npm install pino-coralogix
```

## Quick Start

```javascript
import pino from 'pino';

// Create a logger with the Coralogix transport (runs in a separate worker thread)
const logger = pino({
  transport: {
    target: 'pino-coralogix',
    options: {
      domain: 'us1',
      apiKey: process.env.CORALOGIX_API_KEY,
      applicationName: 'my-app',
      subsystemName: 'api-service'
    }
  }
});

// Start logging
logger.info('Hello Coralogix!');
```
## Configuration

Required options:

| Option | Type | Description |
|--------|------|-------------|
| domain | string | Coralogix domain: us1, us2, eu1, eu2, ap1, ap2, ap3 |
| apiKey | string | Your Coralogix Send-Your-Data API key |
| applicationName | string | Application name (used for grouping logs) |
| subsystemName | string | Subsystem name (used for grouping logs) |

Optional options:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| computerName | string | hostname | Override the computer/host name |
| batchSize | number | 100 | Number of logs to batch before sending |
| flushInterval | number | 1000 | Time in ms between automatic flushes |
| timeout | number | 30000 | HTTP request timeout in ms |
| maxRetries | number | 3 | Maximum number of retry attempts |
| maxBatchSizeBytes | number | 2097152 | Max batch size in bytes (2MB) |
| onError | function | - | Callback for handling errors |
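To illustrate, the optional settings above can be combined in the transport options; the values below are arbitrary examples, not recommended defaults. (The `onError` callback is shown with the direct `build()` usage further down, since Pino transport options generally need to be serializable.)

```javascript
import pino from 'pino';

// Illustrative configuration only: option names come from the tables above,
// the specific values are arbitrary examples.
const logger = pino({
  transport: {
    target: 'pino-coralogix',
    options: {
      domain: 'eu1',
      apiKey: process.env.CORALOGIX_API_KEY,
      applicationName: 'my-app',
      subsystemName: 'api-service',
      computerName: 'api-host-01',   // override the reported hostname
      batchSize: 200,                // send after 200 logs accumulate
      flushInterval: 2000,           // or every 2 seconds, whichever comes first
      timeout: 10000,                // abort HTTP requests after 10 s
      maxRetries: 5,                 // retry failed sends up to 5 times
      maxBatchSizeBytes: 1048576     // cap batches at 1 MB
    }
  }
});

logger.info('Configured with explicit batching options');
```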
Using Pino's `transport` option (as in the Quick Start above) is the preferred method: it runs the transport in a separate worker thread, keeping your main application thread free from I/O operations:
```javascript
import pino from 'pino';

const logger = pino({
  transport: {
    target: 'pino-coralogix',
    options: {
      domain: 'us1',
      apiKey: process.env.CORALOGIX_API_KEY,
      applicationName: 'my-app',
      subsystemName: 'api-service',
      batchSize: 100,
      flushInterval: 1000
    }
  }
});

logger.info('Application started');
logger.warn({ userId: 123 }, 'User session expired');
logger.error(new Error('Connection failed'), 'Database error');
```
For special cases where you need direct control over the transport:
```javascript
import pino from 'pino';
import { build } from 'pino-coralogix';

const transport = await build({
  domain: 'us1',
  apiKey: process.env.CORALOGIX_API_KEY,
  applicationName: 'my-app',
  subsystemName: 'api-service'
});

const logger = pino(transport);
logger.info('Hello Coralogix!');
```
> Note: This method runs in the same thread as your application and may impact performance under high log volume.
Coralogix supports additional fields for better log organization:
```javascript
logger.info({
  category: 'authentication',
  className: 'AuthService',
  methodName: 'login',
  threadId: 'worker-1'
}, 'User logged in successfully');
```
Use the `onError` callback to handle delivery failures:

```javascript
const transport = await build({
  domain: 'us1',
  apiKey: process.env.CORALOGIX_API_KEY,
  applicationName: 'my-app',
  subsystemName: 'api-service',
  onError: (error) => {
    console.error('Failed to send logs to Coralogix:', error);
  }
});
```
For high-volume applications, tune the batching options:

```javascript
const transport = await build({
  domain: 'eu1',
  apiKey: process.env.CORALOGIX_API_KEY,
  applicationName: 'high-volume-app',
  subsystemName: 'worker',
  batchSize: 500,     // send larger batches
  flushInterval: 500  // flush more frequently
});
```
To avoid losing logs, flush and close the transport during graceful shutdown:

```javascript
process.on('SIGTERM', async () => {
  logger.info('Shutting down...');

  // Flush remaining logs before exiting
  await new Promise((resolve) => {
    logger.flush(() => {
      transport.end(() => {
        console.log('All logs sent');
        resolve();
      });
    });
  });

  process.exit(0);
});
```
Pino levels are automatically mapped to Coralogix severity levels:
| Pino Level | Pino Value | Coralogix Severity | Coralogix Value |
|------------|------------|-------------------|-----------------|
| trace | 10 | Debug | 1 |
| debug | 20 | Verbose | 2 |
| info | 30 | Info | 3 |
| warn | 40 | Warn | 4 |
| error | 50 | Error | 5 |
| fatal | 60 | Critical | 6 |
## How It Works

1. Worker Thread (when using the transport option): Pino spawns a worker thread for the transport
2. Streaming: Pino writes JSON logs to the transport stream
3. Transformation: Each log is transformed to Coralogix format (sketched after this list)
4. Batching: Logs accumulate in memory until batch size or time threshold
5. Flushing: Batches are sent to Coralogix via HTTP POST
6. Auto-flush: Remaining logs are flushed on stream end
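As a rough illustration of the transformation step, a Pino JSON line could be mapped to a Coralogix-style entry as in the sketch below. The field names (`timestamp`, `severity`, `text`) and the helper itself are assumptions made for illustration, not the transport's internal format.

```javascript
// Illustrative sketch only: the real transformation is internal to pino-coralogix
// and its exact field names may differ.
const PINO_TO_CORALOGIX_SEVERITY = { 10: 1, 20: 2, 30: 3, 40: 4, 50: 5, 60: 6 };

function toCoralogixEntry(pinoLog) {
  return {
    timestamp: pinoLog.time,                                   // Pino's epoch-millisecond timestamp
    severity: PINO_TO_CORALOGIX_SEVERITY[pinoLog.level] ?? 3,  // see the mapping table above
    text: JSON.stringify(pinoLog)                              // full structured log as the entry body
  };
}

// Example: an "info" line (level 30) becomes severity 3.
console.log(toCoralogixEntry({ level: 30, time: Date.now(), msg: 'Hello Coralogix!' }));
```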
Using Pino's transport option runs the transport in a separate worker thread, which:
- Keeps your main application thread free from I/O blocking
- Prevents HTTP requests from impacting application performance
- Allows logs to be processed asynchronously without backpressure
- Is the recommended pattern for production use
Batches are flushed on any of the following triggers (sketched below):

- Size-based: Flush when `batchSize` logs have accumulated
- Time-based: Flush every `flushInterval` milliseconds
- Capacity-based: Flush when 80% of `maxBatchSizeBytes` is reached
- On close: Flush all remaining logs when the transport closes
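A minimal sketch of how these triggers combine, assuming hypothetical `pendingLogs`/`pendingBytes` counters and a `flush()` function (none of these names are part of the library):

```javascript
// Illustrative only: shows how the documented triggers interact.
const opts = { batchSize: 100, flushInterval: 1000, maxBatchSizeBytes: 2 * 1024 * 1024 };

function shouldFlush(pendingLogs, pendingBytes) {
  return (
    pendingLogs >= opts.batchSize ||               // size-based trigger
    pendingBytes >= opts.maxBatchSizeBytes * 0.8   // capacity-based trigger (80% threshold)
  );
}

// e.g. if (shouldFlush(batch.length, batchBytes)) flush();
// Time-based trigger: an interval timer flushes whatever has accumulated.
// const timer = setInterval(() => flush(), opts.flushInterval);
// On close: clearInterval(timer) and flush() one final time.
```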
## API

### build(options)

Creates a Pino transport for Coralogix.

Parameters:

- options (Object): Configuration options

Returns:

- Promise that resolves to a transform stream for Pino
Example:
```javascript
const transport = await build({
  domain: 'us1',
  apiKey: 'your-api-key',
  applicationName: 'my-app',
  subsystemName: 'api'
});
```

## Testing

This transport was built using Test-Driven Development (TDD):
```bash
# Run all tests
npm test
```

Test coverage includes:
- Transport initialization and configuration validation
- Log transformation (Pino → Coralogix format)
- HTTP client with request mocking
- Batching logic and flush triggers
- End-to-end integration tests
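As a hedged sketch of what such a test might look like with Node's built-in test runner (only the documented `build()` return value, a transform stream, is assumed):

```javascript
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { build } from 'pino-coralogix';

test('build() resolves to a writable transport stream', async () => {
  const transport = await build({
    domain: 'us1',
    apiKey: 'test-key',          // dummy key; this sketch does not assert a real send
    applicationName: 'test-app',
    subsystemName: 'test'
  });

  assert.equal(typeof transport.write, 'function');
  transport.end();               // close the transport so the test exits cleanly
});
```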
## Performance
- Batching: Reduces network overhead by sending multiple logs per request
- Async I/O: Non-blocking HTTP requests using undici
- Smart Flushing: 80% capacity threshold prevents size limit errors
- Memory Efficient: Streams logs without buffering entire payload
## Troubleshooting
### Logs Not Appearing
1. Check API Key: Ensure your API key is correct
2. Verify Domain: Use the correct domain for your Coralogix account
3. Check Flush: Logs are batched; wait for flush or manually flush
4. Review Errors: Use the `onError` callback to see error messages
### High Memory Usage

- Reduce `batchSize` to flush more frequently
- Reduce `flushInterval` to flush sooner
- Check for slow network conditions causing batch accumulation

### Logs Being Dropped
- Check that `maxBatchSizeBytes` isn't being exceeded
- Look for HTTP errors (401, 413, 429, 500)
- Ensure the transport is properly closed on shutdown

## License
Apache 2.0
## Contributing
Contributions are welcome! Please ensure:
- All tests pass (`npm test`)

## Related

- Pino - Fast JSON logger
- Coralogix - Log analytics platform
- pino-abstract-transport - Base transport
For issues related to:
- This transport: Open an issue on GitHub
- Pino: See Pino documentation
- Coralogix: Contact Coralogix support