# Zod-first helpers for OpenAI chat completion
Type-safe helpers for building OpenAI chat completions that always match a Zod schema. `zChatCompletion` wraps the Chat Completions API with a template literal tag, so you can keep prompts readable while guaranteeing the structure of the model response.
- Validate every model response against a Zod schema before your code sees it
- Use ergonomic template literal prompts with embedded dynamic values
- Swap between OpenAI models or reuse your own OpenAI client instance
- Built for Bun first, but works anywhere the official OpenAI SDK runs
Requires Bun ≥ 1.1.0 and an `OPENAI_API_KEY` with access to the Chat Completions API.
```bash
bun add z-chat-completion
# or, with npm / pnpm / yarn
npm install z-chat-completion
```
```ts
import { z } from "zod";
import zChat from "z-chat-completion";

const { joke, rating } = await zChat(
  z.object({
    joke: z.string(),
    rating: z.number().int().min(1).max(5),
  }),
)`Tell me a ${"short"} joke about ${"bun"}.`;

console.log(joke, rating);
```
Behind the scenes, `zChatCompletion` renders the template literal, calls `client.chat.completions.create`, and parses the JSON response with the schema you provide. If the model returns malformed or empty content, a descriptive error is thrown so you can retry or surface the failure.
- `schema`: Any `ZodObject` describing the JSON shape you expect from the model.
- `options.client`: (optional) Provide an existing OpenAI client if you manage authentication elsewhere. Useful for testing or sharing rate limits.
- `options.model`: (optional) Name of the chat completion model to call. Defaults to `gpt-5-mini`.
The function returns a template literal tag (`ZChatTemplate`) that resolves to a parsed value whose type is inferred from the schema.
- Throws `Error("NO_CONTENT...")` when the API response does not include any message content.
- Throws the underlying Zod validation error if the JSON payload cannot be parsed or does not satisfy the schema. Catch these to implement retries, fallbacks, or richer logging.
```ts
import OpenAI from "openai";
import { z } from "zod";
import { zChatCompletion } from "z-chat-completion";

const schema = z.object({ summary: z.string() });

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  baseURL: "https://api.openai.com/v1",
});

const chat = zChatCompletion(schema, {
  client,
  model: "gpt-5-mini",
});

const { summary } =
  await chat`Summarize the latest release notes for this project.`;
```
- Supply your own client to share configuration (proxies, retries) across multiple helpers.
- Override the model per template; see `docs/model-select-2025-10.md` for current guidance on model trade-offs.
- Use any `ZodObject`, including nested schemas and discriminated unions, to express complex outputs.
`zChatCompletion` accepts a mocked client, making it straightforward to exercise in tests:
```ts
import { z } from "zod";
import { describe, expect, it } from "bun:test";
import { zChatCompletion } from "z-chat-completion";

const schema = z.object({ name: z.string() });

const mockClient = {
  chat: {
    completions: {
      create: async () => ({
        choices: [{ message: { content: JSON.stringify({ name: "Ada" }) } }],
      }),
    },
  },
};

describe("zChatCompletion", () => {
  it("parses responses", async () => {
    const chat = zChatCompletion(schema, { client: mockClient as any });
    expect(await chat`Who am I thinking of?`).toEqual({ name: "Ada" });
  });
});
```
```bash
bun install     # install dependencies
bun test        # run test suite
bun run build   # emit compiled artifacts to ./dist
bun check       # Bun type checker & lint pass
```
To try a live call against the API, execute `bun src/index.ts` (the inline smoke test only runs when the file is executed directly).
Releases use `standard-version` for changelog + versioning and `bun publish` for distribution.
```bash
bun run prerelease   # build + test
bun run release      # bump version, tag, and publish (requires auth)
```

If you want to simulate the publish step locally without pushing, run `bun run release -- --dry-run`.
MIT © snomiao