# ai-publish

AI-assisted release authoring tool that generates a changelog, release notes, and the next version number (evidence-backed from git diff).
The sole authority for what changed is git diff <base>..HEAD. The system may still use additional bounded context (e.g. file snippets, searches, and optional commit-message metadata) to understand more.
## prepublish → postpublish

prepublish prepares release outputs (and the next version) locally.
postpublish publishes, then finalizes git state (commit + tag + push).
```bash
npm install --save-dev ai-publish
```
### Configure an LLM provider
ai-publish requires an LLM provider for prepublish, changelog, and release-notes.
Before running the CLI, choose a provider (openai or azure) and set the required environment variables (see the “LLM providers” section below).
- OpenAI: set OPENAI_API_KEY and OPENAI_MODEL
- Azure OpenAI: set AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and AZURE_OPENAI_DEPLOYMENT
In the repo you want to release:
```bash
npx ai-publish prepublish --llm openai
# build/package step depends on your ecosystem
npx ai-publish postpublish
```
### Why two phases?
Publishing is the part most likely to fail or require interaction (credentials, OTP/2FA, network, registry errors). ai-publish splits the flow so your git history and tags stay correct:
- prepublish can safely generate outputs and compute the next v<version> without creating a “release commit” or tag.
- postpublish runs the actual publish step first, and only after publish succeeds does it create the release commit and annotated v<version> tag and push them.
If publishing fails, you do not end up with a pushed release tag that doesn’t correspond to a published artifact.
### What each command does
prepublish:
- Requires a clean worktree.
- Refuses if HEAD is already tagged with a v<version> tag.
- Writes release outputs to disk:
  - changelog (default CHANGELOG.md, overridable via --out)
  - release notes at release-notes/v<version>
  - optional manifest version update (disabled via --no-write)
- Writes an intent file: .ai-publish/prepublish.json.
- Does not create a git commit.
- Does not create a git tag.
- Does not push anything.
postpublish:
- Requires .ai-publish/prepublish.json (i.e., you must run prepublish first).
- Runs the project-type publish step first.
- After publish succeeds, it:
  - creates a release commit containing only the prepared release paths
    - commit message: chore(release): v<version>
  - creates an annotated tag v<version> pointing at that commit
  - pushes the current branch and the tag to the remote (default origin)
- Refuses if your working tree has changes outside the release output paths recorded by prepublish.
## Recommended release flow
### npm

```bash
npx ai-publish prepublish --llm openai
npm run build
npx ai-publish postpublish
```
### .NET

```bash
npx ai-publish prepublish --project-type dotnet --manifest path/to/MyProject.csproj --llm openai
dotnet pack -c Release
npx ai-publish postpublish --project-type dotnet --manifest path/to/MyProject.csproj
```
### Rust

```bash
npx ai-publish prepublish --project-type rust --manifest Cargo.toml --llm openai
cargo publish --dry-run
npx ai-publish postpublish --project-type rust --manifest Cargo.toml
```
### Python

```bash
npx ai-publish prepublish --project-type python --manifest pyproject.toml --llm openai
python -m build
npx ai-publish postpublish --project-type python --manifest pyproject.toml
```
### Go

```bash
npx ai-publish prepublish --project-type go --manifest go.mod --llm openai
# build/test as needed
npx ai-publish postpublish --project-type go --manifest go.mod
```
## One-off generation (without publishing)
If you only want to generate markdown (no publish step, no commit/tag/push), you can run the generators directly:
```bash
npx ai-publish changelog --llm openai
npx ai-publish release-notes --llm openai
```
- changelog writes CHANGELOG.md by default.
- release-notes writes to release-notes/v<next> by default when --out is omitted and you are not using an explicit --base (or release-notes/<tag> if HEAD is already tagged).
## Quickstart (from source)
If you’re developing ai-publish itself:
```bash
npm install
npm run build
```
Then, from the target repo:
```bash
node /path/to/ai-publish/dist/cli.js changelog --llm openai
```
## Core invariants
- Sole authority for what changed is git diff <base>..HEAD.
- The diff is indexed and queryable.
- Binary diffs are metadata-only.
- The full diff is never returned by APIs; callers must request bounded hunks by ID.
These rules are the point of the tool: they make output auditable and make prompt-injection style attacks much harder (because downstream analysis can only “see” bounded evidence).
## Diff index storage (.ai-publish)
ai-publish persists a bounded diff index under .ai-publish/diff-index/.
- The manifest is metadata-only (manifest.json).
- Each hunk is stored as a separate .patch file (bounded and possibly truncated).
Important: these hunk files are still derived from your repo’s diff and may contain sensitive information (secrets, credentials, proprietary code). The repo’s .gitignore should exclude .ai-publish/ (this repo does).
If you want the diff index stored elsewhere (for example in a temp directory, encrypted volume, or CI workspace scratch area), pass:
```bash
npx ai-publish changelog --llm openai --index-root-dir <dir>
npx ai-publish release-notes --llm openai --index-root-dir <dir>
npx ai-publish prepublish --llm openai --index-root-dir <dir>
```
Practical defaults:
- CI runners: set --index-root-dir to a workspace scratch directory that is guaranteed writable.
- Local runs:
  - Linux/macOS: --index-root-dir /tmp/ai-publish
  - Windows (PowerShell): --index-root-dir "$env:TEMP\ai-publish"
## How it works (high level)
1. indexDiff() runs git diff with rename detection and builds an index under .ai-publish/diff-index/.
2. Each diff hunk is stored as its own file in hunks/.
3. The index manifest (manifest.json) contains only metadata + hunk IDs (never full patch content).
4. getDiffHunks({ hunkIds }) returns only requested hunks, enforcing a total byte limit.
For changes that have no textual hunks (e.g. rename-only), ai-publish creates a metadata-only @@ meta @@ pseudo-hunk so downstream output can still attach explicit evidence.
This also applies to binary diffs and other hunkless changes: evidence is represented as metadata only.
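The bounded-retrieval contract can be sketched in TypeScript. This is illustrative only; the real implementation lives in src/diff/getDiffHunks.ts, and the names and shapes here are hypothetical:

```typescript
// Illustrative sketch of bounded hunk retrieval. Hunks may only be requested
// by ID, and the total returned bytes are capped, so callers can never pull
// the full, unbounded diff.
type HunkStore = Map<string, string>; // hunkId -> patch text

function getBoundedHunks(
  store: HunkStore,
  hunkIds: string[],
  maxTotalBytes: number
): Map<string, string> {
  const out = new Map<string, string>();
  let used = 0;
  for (const id of hunkIds) {
    const patch = store.get(id);
    if (patch === undefined) {
      // Only hunk IDs that exist in the index are allowed.
      throw new Error(`Unknown hunk ID: ${id}`);
    }
    used += new TextEncoder().encode(patch).length; // UTF-8 byte length
    if (used > maxTotalBytes) {
      // Refuse once the byte budget is exhausted.
      throw new Error("Hunk byte budget exhausted");
    }
    out.set(id, patch);
  }
  return out;
}
```

The key design point is that refusal (rather than silent truncation) keeps the evidence set auditable.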
### Three-pass generation pipeline
Both changelog and release-notes follow the same three-pass structure:
1. Pass 1: Mechanical (metadata → notes)
   The model is given only deterministic, metadata-only inputs:
   - the diff summary (file list, change types, basic stats)
   - evidence nodes (file-level nodes with hunk IDs)
   - deterministic “mechanical facts” (counts + a per-file index, still no patch text)
   It outputs a list of “mechanical notes” — a compact intermediate representation of what changed.
2. Pass 2: Semantic (tool-gated, budgeted retrieval)
   The model may request _bounded_ additional context to interpret impact, via a restricted tool surface:
   - getDiffHunks(hunkIds) (only hunk IDs that exist in the evidence set are allowed)
   - bounded repo context (HEAD-only): file snippets, “snippet around”, file metadata
   - bounded repo search: path search, file search, repo-wide text search, file listing
   All tool outputs are budgeted globally (byte caps), and the pipeline refuses requests once a budget is exhausted.
   Optional commit-message context for base..HEAD can be included, but it is explicitly treated as untrusted and never as evidence.
3. Pass 3: Editorial (structured output + guardrails)
   - For changelog, the model must output a structured changelog model (Keep a Changelog style) where every bullet references explicit evidence node IDs.
     - The pipeline repairs/dedupes bullets deterministically, conservatively fixes invalid/missing evidence references, and applies deterministic breaking-change heuristics.
     - Coverage guardrail: if any evidence node is not referenced by at least one bullet, the pipeline injects an auto-generated bullet (e.g. “Updated <file>.”) so the changelog covers the entire base..HEAD diff.
     - The model is validated (no unknown evidence references), then rendered to markdown using the HEAD commit date as the release date.
   - For release-notes, the model outputs human-facing markdown plus a list of evidence node IDs supporting that markdown.
     - The pipeline refuses “markdown with zero evidence” (it will not implicitly attach all evidence).
     - Rendering prefers a real v<version> tag at HEAD when available, to avoid emitting “Unreleased” for already-tagged releases.
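The evidence-backed model and the coverage guardrail can be sketched as follows. These shapes are hypothetical illustrations; the project's actual contract lives in src/llm/types.ts and src/changelog/validate.ts:

```typescript
// Hypothetical changelog model: every bullet must carry evidence node IDs.
interface ChangelogBullet {
  text: string;
  evidence: string[]; // evidence node IDs; must be non-empty and known
}

interface ChangelogModel {
  added: ChangelogBullet[];
  changed: ChangelogBullet[];
  fixed: ChangelogBullet[];
  removed: ChangelogBullet[];
  breakingChanges: ChangelogBullet[];
}

// Coverage guardrail sketch: any evidence node not referenced by a bullet
// gets an auto-generated bullet, so the changelog covers the whole diff.
function ensureCoverage(model: ChangelogModel, allNodeIds: string[]): ChangelogModel {
  const referenced = new Set(
    [...model.added, ...model.changed, ...model.fixed, ...model.removed, ...model.breakingChanges]
      .flatMap((b) => b.evidence)
  );
  for (const id of allNodeIds) {
    if (!referenced.has(id)) {
      // "Updated <node>." stands in for the real auto-generated wording.
      model.changed.push({ text: `Updated ${id}.`, evidence: [id] });
    }
  }
  return model;
}
```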
Code pointers:
- Pipelines: src/pipeline/runChangelogPipeline.ts, src/pipeline/runReleaseNotesPipeline.ts
- LLM contract: src/llm/types.ts
- Diff indexing + bounded hunk retrieval: src/diff/indexDiff.ts, src/diff/getDiffHunks.ts
- Evidence construction: src/changelog/evidence.ts
- Changelog validation + rendering: src/changelog/validate.ts, src/changelog/renderKeepAChangelog.ts
- Deterministic mechanical facts: src/llm/deterministicFacts.ts
### Version bump calculation
The next version recommendation is computed deterministically from the changelog model:
- major if there are any breakingChanges
- minor if there are any added entries
- patch if there are any changed/fixed/removed entries
- none if the diff is internal-only
Then ai-publish computes nextVersion using semver rules (including prerelease handling when the previous version is a prerelease). The LLM is only used to produce a human-readable justification, and it is required to repeat the same nextVersion — if it disagrees, the pipeline fails.
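The deterministic bump rules above can be sketched directly. This is a simplified illustration: it assumes a plain x.y.z previous version and omits the prerelease handling mentioned above (the real logic lives in src/version/bump.ts):

```typescript
// Deterministic bump-type rules: breaking > added > changed/fixed/removed > none.
type BumpType = "major" | "minor" | "patch" | "none";

interface ChangelogCounts {
  breakingChanges: number;
  added: number;
  changed: number;
  fixed: number;
  removed: number;
}

function bumpType(c: ChangelogCounts): BumpType {
  if (c.breakingChanges > 0) return "major";
  if (c.added > 0) return "minor";
  if (c.changed + c.fixed + c.removed > 0) return "patch";
  return "none"; // internal-only diff: refuse to cut a release
}

// Simplified semver bump; prerelease previous versions are NOT handled here.
function nextVersion(prev: string, bump: BumpType): string {
  const [maj, min, pat] = prev.split(".").map(Number);
  switch (bump) {
    case "major": return `${maj + 1}.0.0`;
    case "minor": return `${maj}.${min + 1}.0`;
    case "patch": return `${maj}.${min}.${pat + 1}`;
    case "none": return prev;
  }
}
```

Because the calculation is deterministic, the LLM's justification can be cross-checked against it, which is exactly why a disagreeing nextVersion fails the pipeline.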
Code pointers:
- Version bump pipeline: src/pipeline/runVersionBumpPipeline.ts (and src/pipeline/runPrepublishPipeline.ts)
- Bump type + semver calculation: src/version/bump.ts
- Tag-based base resolution: src/version/resolveVersionBase.ts
## Versioning (git tags)
ai-publish treats git tags of the form v<version> as the source of truth for release versions.
- For changelog and release-notes: if --base is omitted, the diff base defaults to the most recent reachable v<version> tag commit (otherwise the empty tree).
- For prepublish and version bumping: if no version tags exist, ai-publish infers previousVersion from the selected manifest (or you can set it explicitly via --previous-version), and it may infer a base commit from manifest history when possible.
- If your repo has no tags and the manifest is already bumped to the next version, use --previous-version-from-manifest-history to infer the previous distinct version from the manifest's git history.
- prepublish computes a predicted v<version> tag and prepares release outputs locally.
- postpublish creates a local release commit and an annotated tag v<version> pointing at that commit after publish succeeds, then pushes the branch + tag.
- Manifests (e.g. package.json, .csproj) are updated to match v<version> (unless --no-write).
## CLI
LLM mode is required for changelog, release-notes, and prepublish: you must pass --llm openai or --llm azure.
postpublish does not use the LLM and does not accept --llm.
Providers are documented in the “LLM providers” section below; OpenAI is listed first.
```text
ai-publish changelog [--base <ref>] [--out <path>] [--index-root-dir <dir>] [--public-path-prefix <prefix>] [--public-file-path <path>] [--internal-path-prefix <prefix>] --llm <provider> [--commit-context <mode>] [--commit-context-bytes <n>] [--commit-context-commits <n>] [--debug]
ai-publish release-notes [--base <ref>] [--previous-version <version>] [--out <path>] [--index-root-dir <dir>] [--public-path-prefix <prefix>] [--public-file-path <path>] [--internal-path-prefix <prefix>] --llm <provider> [--commit-context <mode>] [--commit-context-bytes <n>] [--commit-context-commits <n>] [--debug]
ai-publish prepublish [--base <ref>] [--previous-version <version>] [--previous-version-from-manifest-history] [--project-type <type>] [--manifest <path>] [--package <path>] [--no-write] [--out <path>] [--index-root-dir <dir>] [--public-path-prefix <prefix>] [--public-file-path <path>] [--internal-path-prefix <prefix>] --llm <provider> [--debug]
ai-publish postpublish [--project-type <type>] [--manifest <path>] [--publish-command <command>] [--skip-publish] [--debug]
ai-publish --help
```
Postpublish publish control:
- --publish-command <command>: run your own publish step before commit/tag/push.
- --skip-publish: skip the built-in publish step entirely.
### Command details
- changelog
  - Default output path: CHANGELOG.md
  - Writes the changelog markdown, then prints a JSON summary (base resolution, tags, etc.).
  - If the output file already exists, prepends the newly generated version entry at the top (full history).
    - Special case: ## [Unreleased] is replaced (upsert) rather than duplicated.
    - Legacy # Changelog (<version>) headers are migrated to a ## [<version>] section when possible.
- release-notes
  - If --out is provided, writes exactly there.
  - If --out is not provided:
    - If HEAD is already tagged v<version>, writes release-notes/<tag>.
    - Otherwise (most common, when --base is omitted), computes the next version tag and writes release-notes/v<next>.
    - If you pass an explicit --base and HEAD is not tagged, the default output remains RELEASE_NOTES.md.
  - Always prints a JSON summary.
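The default output-path rules amount to a small decision tree, sketched below. This is illustrative: the option names and the .md extensions are assumptions, not necessarily the tool's exact file names:

```typescript
// Sketch of the documented release-notes output-path resolution.
function defaultReleaseNotesPath(opts: {
  outFlag?: string;      // explicit --out wins unconditionally
  headTag?: string;      // e.g. "v1.2.3" if HEAD is already tagged
  explicitBase?: string; // an explicit --base changes the fallback
  nextTag: string;       // computed next version tag, e.g. "v1.3.0"
}): string {
  if (opts.outFlag) return opts.outFlag;
  if (opts.headTag) return `release-notes/${opts.headTag}.md`;
  if (!opts.explicitBase) return `release-notes/${opts.nextTag}.md`;
  return "RELEASE_NOTES.md"; // explicit --base and HEAD not tagged
}
```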
- prepublish
  - Refuses to run if the git worktree is dirty.
  - Refuses to run if HEAD is already tagged with a version tag.
  - If no version tags exist, infers previousVersion from the selected manifest (or use --previous-version).
    - For --project-type go without tags, you must pass --previous-version.
  - Writes:
    - changelog (default CHANGELOG.md, overridable via --out)
    - release notes under release-notes/v<version>
    - optionally updates the selected manifest version (disabled via --no-write)
  - Does not create a commit or tag (those are created by postpublish after publish succeeds).
  - Prints a JSON result to stdout (it does not print the markdown).
  - --package <path> is a backwards-compatible alias for npm manifests; it implies --project-type npm.
  - Changelog behavior:
    - If the changelog output file already exists, prepublish prepends the newly generated version entry at the top (full history).
    - Legacy # Changelog (<version>) headers are migrated to a ## [<version>] section when possible.
- postpublish
  - Requires .ai-publish/prepublish.json (i.e., you must run prepublish first).
  - Requires being on a branch (not detached HEAD).
  - Runs a project-type-specific publish step.
  - After publish succeeds, creates a release commit + annotated v<version> tag, then pushes the branch + tag.
  - Prints a JSON result to stdout.
  - Note: --llm is not accepted for postpublish.
### Logging
ai-publish prints machine-readable JSON to stdout for several commands. To keep stdout parseable, all logs are written to stderr.
Environment variables:
- AI_PUBLISH_LOG_LEVEL: silent | info | debug | trace (default: info for CLI runs, silent for programmatic usage)
- AI_PUBLISH_TRACE_TOOLS=1: logs which bounded semantic tools were called, along with request counts and budget usage (no full diff/snippet dumping)
- AI_PUBLISH_TRACE_LLM=1: logs LLM request/response metadata (provider + label + sizes)
- AI_PUBLISH_TRACE_LLM_OUTPUT: prints raw structured LLM outputs (truncated) to stderr (enabled by default for CLI runs; set to 0 to disable)
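Because summaries go to stdout and logs go to stderr, a wrapper script can parse command output directly. A minimal TypeScript sketch (the JSON result shape is not specified here, so it is treated as unknown):

```typescript
import { execFile } from "node:child_process";

// Parse the machine-readable JSON summary from a command's stdout.
// Logs go to stderr, so stdout should contain only the JSON payload.
function parseSummary(stdout: string): unknown {
  return JSON.parse(stdout.trim());
}

// Usage sketch: run an ai-publish command and resolve with its JSON result.
function runAiPublish(args: string[]): Promise<unknown> {
  return new Promise((resolve, reject) => {
    execFile("npx", ["ai-publish", ...args], (err, stdout) => {
      if (err) return reject(err);
      try {
        resolve(parseSummary(stdout));
      } catch (parseErr) {
        reject(parseErr);
      }
    });
  });
}
```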
### Examples
#### npm
1. ai-publish prepublish --llm openai
2. Build your package.
3. ai-publish postpublish
#### .NET
1. ai-publish prepublish --project-type dotnet --manifest path/to/MyProject.csproj --llm openai
2. Build.
3. ai-publish postpublish --project-type dotnet --manifest path/to/MyProject.csproj
### Publish steps by project type
- npm: runs npm publish
- dotnet: pushes already-built packages from bin/Release using dotnet nuget push.
  - It only pushes the .nupkg matching the predictedTag from prepublish (to avoid accidentally re-publishing old packages left in the build output).
  - By default, it does not pass --source, so dotnet nuget push uses your nuget.config (e.g. defaultPushSource).
    - To override, set AI_PUBLISH_NUGET_SOURCE (or NUGET_SOURCE).
  - Configure auth with AI_PUBLISH_NUGET_API_KEY (or NUGET_API_KEY).
    - For Azure DevOps Artifacts, set AI_PUBLISH_NUGET_SOURCE to your feed v3 URL and use a PAT as the API key.
- rust: runs cargo publish
- go: no publish command (the “publish” is the pushed tag)
- python: runs python -m build then python -m twine upload dist/*
## Surface classification overrides
Surface classification (what counts as “user-facing” vs “internal-only”) can be overridden explicitly via CLI flags:
- --public-path-prefix (repeatable)
- --public-file-path (repeatable)
- --internal-path-prefix (repeatable)
Programmatic consumers can pass the equivalent defaultClassifyOverrides.
## Programmatic usage (TS/JS)
The same functionality is available as a library API with CLI-equivalent parameters.
### Custom llmClient
For programmatic use, you may optionally provide your own llmClient implementation (alternate providers, wrappers/instrumentation, caching, or network-free tests). When llmClient is provided, it is used instead of constructing the default client from environment variables.
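For example, a network-free stub client keeps tests deterministic. The interface below is hypothetical (method name and request shape included); the real llmClient contract is defined in src/llm/types.ts:

```typescript
// Hypothetical stub llmClient for tests. The real contract lives in
// src/llm/types.ts; this shape is illustrative only.
interface StubLlmClient {
  complete(request: { label: string; prompt: string }): Promise<string>;
}

function makeStubClient(cannedOutputs: Record<string, string>): StubLlmClient {
  return {
    async complete(request) {
      // Return a canned response keyed by the request label, so tests are
      // deterministic and need no network access.
      const out = cannedOutputs[request.label];
      if (out === undefined) {
        throw new Error(`No canned output for ${request.label}`);
      }
      return out;
    },
  };
}
```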
```ts
import { generateChangelog, generateReleaseNotes } from "ai-publish"

await generateChangelog({
  llm: "openai",
  // llmClient: myCustomClient,
  // base: "<ref>",
  // outPath: "CHANGELOG.md",
  // indexRootDir: "/tmp/ai-publish",
  // cwd: process.cwd(),
})

await generateReleaseNotes({
  llm: "openai",
  // llmClient: myCustomClient,
  // base: "<ref>",
  // outPath: "RELEASE_NOTES.md",
  // indexRootDir: "/tmp/ai-publish",
  // cwd: process.cwd(),
})
```
## LLM providers
### OpenAI
Set environment variables:
- OPENAI_API_KEY
- OPENAI_MODEL (a chat model that supports JSON-schema structured outputs)
- OPENAI_BASE_URL (optional; default https://api.openai.com/v1)
- OPENAI_TIMEOUT_MS (optional)
Note: OpenAI mode uses Structured Outputs (JSON schema). Your selected model must support response_format: { type: "json_schema", ... } for Chat Completions.
### Azure OpenAI
Set environment variables:
- AZURE_OPENAI_ENDPOINT (e.g. https://<resource>.openai.azure.com)
- AZURE_OPENAI_API_KEY
- AZURE_OPENAI_DEPLOYMENT (your chat model deployment name)
- AZURE_OPENAI_API_VERSION (optional; default 2024-08-01-preview)
- AZURE_OPENAI_TIMEOUT_MS (optional)
Note: LLM mode uses Structured Outputs (JSON schema) and requires Azure OpenAI API versions 2024-08-01-preview or later.
## Testing
- npm test runs network-free unit + integration tests.
- End-to-end changelog and release notes generation are covered by integration tests that create temporary git repo fixtures and use a local stub LLM client so outputs are stable without network calls.
### LLM evaluation tests (opt-in)
Additional integration tests can ask Azure OpenAI to judge whether the generated changelog/release notes accurately reflect the evidence.
- Opt-in and skipped by default (so CI remains deterministic and network-free).
- Local-only: skipped when CI is set.
- Run with npm run test:llm-eval (requires the Azure env vars listed above).
- Internally gated by AI_PUBLISH_LLM_EVAL=1 (the script sets it for you).
The evaluator uses structured JSON output with this schema:
- { "accepted": boolean, "reason": string | null }
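A runtime guard for that schema might look like this (illustrative only; not the project's actual validation code):

```typescript
// Shape of the evaluator's structured output, per the schema above.
interface EvalResult {
  accepted: boolean;
  reason: string | null;
}

// Narrowing guard: checks an unknown parsed value against the schema.
function isEvalResult(value: unknown): value is EvalResult {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.accepted === "boolean" &&
    (typeof v.reason === "string" || v.reason === null)
  );
}
```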
### LLM generation tests (opt-in)
An additional integration test can ask Azure OpenAI to generate changelog / release notes output end-to-end.
- Opt-in and skipped by default (so CI remains deterministic and network-free).
- Local-only: skipped when CI is set.
- Run with npm run test:llm-generate (requires the Azure env vars listed above).
- Internally gated by AI_PUBLISH_LLM_GENERATE=1 (the script sets it for you).
### When to run the LLM suites
If you change any of the following, run both npm run test:llm-eval and npm run test:llm-generate in addition to npm test:
- src/llm/* (Azure/OpenAI clients)
- LLM pipeline orchestration in src/pipeline/*
- Output schemas/contracts used by the LLM passes
## Troubleshooting
- Missing required flag: --llm
  - changelog, release-notes, and prepublish require --llm openai or --llm azure.
- HEAD is already tagged ... Refusing to prepublish twice.
  - prepublish is intentionally one-shot per version. Move HEAD forward or delete the tag if you’re intentionally retrying.
- No user-facing changes detected (bumpType=none). Refusing to prepare a release.
  - ai-publish refuses to cut a release if the changelog model has no user-facing changes.
- Missing .ai-publish/prepublish.json. Run prepublish first.
  - postpublish requires the intent file written by prepublish.
- .NET postpublish requires --manifest
  - Provide --manifest for the dotnet project type.
- Missing NuGet API key...
  - Set AI_PUBLISH_NUGET_API_KEY (or NUGET_API_KEY) before running dotnet postpublish.
- Semantic pass request: expected JSON but got: ...
  - This usually means the LLM provider returned extra text, multiple JSON objects, or truncated output due to output token limits.
  - ai-publish runs the semantic “tool request” phase in small batches across multiple rounds; if you still see this intermittently, enable request/response tracing to diagnose provider behavior:
    - AI_PUBLISH_TRACE_LLM=1
    - AI_PUBLISH_LOG_LEVEL=debug
  - On Azure, ensure AZURE_OPENAI_API_VERSION is 2024-08-01-preview or later (Structured Outputs).