PostHog analytics wrapper for Replicate SDK - track AI model usage with LLM observability
PostHog LLM observability for the Replicate SDK. Drop-in replacement that automatically tracks all your AI model calls.
```bash
npm install posthog-replicate posthog-node
```

```typescript
import { Replicate } from 'posthog-replicate';
import { PostHog } from 'posthog-node';
const posthog = new PostHog('<ph_project_api_key>'); // your PostHog project API key
const replicate = new Replicate({ posthog });
const output = await replicate.run('stability-ai/sdxl', {
  input: { prompt: 'A sunset over mountains' },
  posthogDistinctId: 'user_123'
});
await posthog.shutdown();
```
Every call sends a `$ai_generation` event to PostHog with the model, latency, and input/output.

Tracked methods:

- `run()` - full execution with output
- `stream()` - streaming responses (see the streaming sketch below)
- `predictions.create()` - async prediction creation
- `predictions.get()` - prediction status polling (captures output when complete)
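A minimal streaming sketch. This assumes `stream()` mirrors the Replicate SDK's streaming API (an async iterable of server-sent events) and accepts `posthogDistinctId` the same way `run()` does; the model name is just an example.

```typescript
import { Replicate } from 'posthog-replicate';
import { PostHog } from 'posthog-node';

const posthog = new PostHog('<ph_project_api_key>');
const replicate = new Replicate({ posthog });

// Each server-sent event carries a chunk of the model's output.
for await (const event of replicate.stream('meta/meta-llama-3-8b-instruct', {
  input: { prompt: 'Write a haiku about observability' },
  posthogDistinctId: 'user_123'
})) {
  process.stdout.write(event.toString());
}

await posthog.shutdown();
```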
Tracking options are automatically linked between `predictions.create()` and `predictions.get()` calls:

```typescript
// Pass tracking options once at creation
const prediction = await replicate.predictions.create({
  model: 'stability-ai/sdxl',
  input: { prompt: 'A sunset' },
  posthogDistinctId: 'user_123'
});

// Options are automatically inherited - no need to pass them again
const result = await replicate.predictions.get(prediction.id);
```
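A sketch of a full polling loop built on the documented `create()`/`get()` behavior. It assumes the wrapper captures the output once `get()` observes a terminal status (as noted above); the 1-second interval is arbitrary.

```typescript
import { Replicate } from 'posthog-replicate';
import { PostHog } from 'posthog-node';

const posthog = new PostHog('<ph_project_api_key>');
const replicate = new Replicate({ posthog });

let prediction = await replicate.predictions.create({
  model: 'stability-ai/sdxl',
  input: { prompt: 'A sunset' },
  posthogDistinctId: 'user_123'
});

// Poll until the prediction reaches a terminal state; tracking options are inherited.
while (!['succeeded', 'failed', 'canceled'].includes(prediction.status)) {
  await new Promise((resolve) => setTimeout(resolve, 1000));
  prediction = await replicate.predictions.get(prediction.id);
}

console.log(prediction.output);
await posthog.shutdown();
```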
- Not tracked: `predictions.list()`, `predictions.cancel()`, `models.*`, `deployments.*`, `hardware.*` and other non-generation methods
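A small sketch of the pass-through behavior: non-generation methods are assumed to behave exactly like the stock Replicate SDK and simply don't emit `$ai_generation` events.

```typescript
import { Replicate } from 'posthog-replicate';
import { PostHog } from 'posthog-node';

const posthog = new PostHog('<ph_project_api_key>');
const replicate = new Replicate({ posthog });

// Regular Replicate SDK calls; nothing is sent to PostHog for these.
const model = await replicate.models.get('stability-ai', 'sdxl');
const recent = await replicate.predictions.list();
console.log(model.name, recent.results.length);

await posthog.shutdown();
```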
- Call `posthog.shutdown()` before your app exits to flush pending events
- Input/output is tracked by default - use `posthogPrivacyMode: true` to disable (see the sketch below)
- Requires Node.js 20+
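A privacy-mode sketch. It assumes `posthogPrivacyMode` is passed per call alongside `posthogDistinctId` (it may also be accepted at the constructor level); the generation event is still sent, just without the input/output payloads.

```typescript
import { Replicate } from 'posthog-replicate';
import { PostHog } from 'posthog-node';

const posthog = new PostHog('<ph_project_api_key>');
const replicate = new Replicate({ posthog });

const output = await replicate.run('stability-ai/sdxl', {
  input: { prompt: 'A sunset over mountains' },
  posthogDistinctId: 'user_123',
  posthogPrivacyMode: true // don't send the prompt or output to PostHog
});

await posthog.shutdown();
```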