# pprof-to-md

Convert pprof profiling data into Markdown format for LLM-assisted performance analysis.

pprof-to-md transforms binary pprof profiles into structured Markdown that LLMs can analyze to identify performance bottlenecks, explain root causes, and suggest optimizations.
## Installation

```bash
npm install pprof-to-md
```

Or run directly:

```bash
npx pprof-to-md profile.pb.gz
```
## Usage

```bash
# Basic usage - analyze a CPU profile
pprof-to-md cpu-profile.pb.gz
```
### Options
| Option | Description | Default |
|--------|-------------|---------|
| `-f, --format` | Output format: `summary`, `detailed`, `adaptive` | `adaptive` |
| `-t, --type` | Profile type: `cpu`, `heap`, `auto` | `auto` |
| `-o, --output` | Output file (stdout if not specified) | - |
| `-s, --source-dir` | Source directory for code context | - |
| `--no-source` | Disable source code inclusion | `false` |
| `--max-hotspots` | Maximum hotspots to show | `10` |
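Combining several of these flags, a hypothetical invocation (file and directory names illustrative) might look like:

```bash
# Heap profile, full detail, with source context pulled from ./src,
# written to heap-analysis.md instead of stdout
npx pprof-to-md heap.pb.gz \
  --type heap \
  --format detailed \
  --source-dir ./src \
  --output heap-analysis.md
```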
### API

```typescript
import { convert } from 'pprof-to-md'

const markdown = convert('profile.pb.gz', {
  format: 'adaptive',
  profileType: 'cpu',
  maxHotspots: 10
})
console.log(markdown)
```

## Output Formats
### Summary
Compact format for quick triage:
```markdown
# PPROF Analysis: CPU

Profile: profile.pb.gz
Duration: 30s | Samples: 45,231

## Top Hotspots (by self-time)

| Rank | Function | Self% | Cum% | Location |
|------|----------|-------|------|----------|
| 1 | JSON.parse | 23.4% | 23.4% | |
| 2 | processRequest | 15.2% | 67.8% | handler.ts:142 |

## Key Observations

- Native JSON.parse dominates (23.4% self-time)
```

### Detailed
Full context with annotated call trees:
```markdown
## Call Tree (annotated flame graph)

> Legend: [self% | cum%] function @ location

[  0.1% | 100.0%] (root)
└── [ 15.2% |  67.8%] processRequest @ handler.ts:142 ◀ HOTSPOT
    └── [ 23.4% |  23.4%] JSON.parse @ ◀ HOTSPOT

## Function Details

### processRequest

Samples: 6,878 (15.2% self) | Cumulative: 30,678 (67.8%)
Callers: handleHTTP
Callees: parseBody, validateSchema
```

### Adaptive
Summary with drill-down sections and anchor links:
```markdown
## Executive Summary

- Primary bottleneck: JSON.parse (23.4% of CPU)
- Optimization potential: 🟢 HIGH (67% in application code)

## Top Hotspots

1. JSON.parse (23.4%) → Details
2. processRequest (15.2%) → Details

---

## Detailed Analysis

### JSON.parse

Call path: handleHTTP → processRequest → parseBody → JSON.parse
Self-time: 23.4% (10,584 samples)
```

## Collecting Profiles
```typescript
import * as pprof from '@datadog/pprof'
import { writeFileSync } from 'fs'
import { gzipSync } from 'zlib'

// CPU profiling
pprof.time.start({ durationMillis: 30000 })
// ... run workload ...
const profile = await pprof.time.stop()
writeFileSync('cpu.pb.gz', gzipSync(profile.encode()))

// Heap profiling
pprof.heap.start(512 * 1024, 64)
// ... run workload ...
const heapProfile = await pprof.heap.profile()
writeFileSync('heap.pb.gz', gzipSync(heapProfile.encode()))
```

## Requirements

- Node.js >= 22.6.0 (uses native TypeScript type stripping)

## License

Apache-2.0