Polyfill for the Prompt API (`LanguageModel`) backed by Firebase AI Logic, Gemini API, OpenAI API, or Transformers.js.
```bash
npm install prompt-api-polyfill
```

This package provides a browser polyfill for the Prompt API `LanguageModel`, supporting dynamic backends:
- Firebase AI Logic (cloud)
- Google Gemini API (cloud)
- OpenAI API (cloud)
- Transformers.js (local after initial model download)
When loaded in the browser, it defines a global:

```js
window.LanguageModel;
```

so you can use the Prompt API shape even in environments where it is not yet
natively available.
For Firebase AI Logic (cloud):

- Uses: the `firebase/ai` SDK.
- Select by setting: `window.FIREBASE_CONFIG`.
- Model: Uses default if not specified (see `backends/defaults.js`).

For Google Gemini API (cloud):

- Uses: the `@google/generative-ai` SDK.
- Select by setting: `window.GEMINI_CONFIG`.
- Model: Uses default if not specified (see `backends/defaults.js`).

For OpenAI API (cloud):

- Uses: the `openai` SDK.
- Select by setting: `window.OPENAI_CONFIG`.
- Model: Uses default if not specified (see `backends/defaults.js`).

For Transformers.js (local):

- Uses: the `@huggingface/transformers` SDK.
- Select by setting: `window.TRANSFORMERS_CONFIG`.
- Model: Uses default if not specified (see `backends/defaults.js`).
---
Install from npm:

```bash
npm install prompt-api-polyfill
```
For Firebase AI Logic:

1. Create a Firebase project with Generative AI enabled.
2. Provide your Firebase config on `window.FIREBASE_CONFIG`.
3. Import the polyfill.
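A minimal sketch of these steps (the script path `./prompt-api-polyfill.js` is an assumption; adjust it to wherever the polyfill is served from):

```html
<script>
  // Must be set before the polyfill is imported.
  window.FIREBASE_CONFIG = {
    apiKey: 'YOUR_FIREBASE_WEB_API_KEY',
    projectId: 'your-gcp-project-id',
    appId: 'YOUR_FIREBASE_APP_ID',
  };
</script>
<script type="module" src="./prompt-api-polyfill.js"></script>
```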
For Gemini:

1. Get a Gemini API key from Google AI Studio.
2. Provide your API key on `window.GEMINI_CONFIG`.
3. Import the polyfill.
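A minimal sketch of these steps (the script path `./prompt-api-polyfill.js` is an assumption; adjust it to your setup):

```html
<script>
  // Must be set before the polyfill is imported.
  window.GEMINI_CONFIG = { apiKey: 'YOUR_GEMINI_API_KEY' };
</script>
<script type="module" src="./prompt-api-polyfill.js"></script>
```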
For OpenAI:

1. Get an OpenAI API key from the OpenAI Platform.
2. Provide your API key on `window.OPENAI_CONFIG`.
3. Import the polyfill.
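A minimal sketch of these steps (the script path `./prompt-api-polyfill.js` is an assumption; adjust it to your setup):

```html
<script>
  // Must be set before the polyfill is imported.
  window.OPENAI_CONFIG = { apiKey: 'YOUR_OPENAI_API_KEY' };
</script>
<script type="module" src="./prompt-api-polyfill.js"></script>
```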
For Transformers.js:

1. Only a dummy API key is required (the model runs locally in the browser).
2. Provide configuration on `window.TRANSFORMERS_CONFIG`.
3. Import the polyfill.
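A minimal sketch of these steps, using the configuration values shown later in this README (the script path `./prompt-api-polyfill.js` is an assumption):

```html
<script>
  // Must be set before the polyfill is imported.
  window.TRANSFORMERS_CONFIG = {
    apiKey: 'dummy',
    modelName: 'onnx-community/gemma-3-1b-it-ONNX-GQA',
    device: 'webgpu',
    dtype: 'q4f16',
  };
</script>
<script type="module" src="./prompt-api-polyfill.js"></script>
```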
---
Create a `.env.json` file (see "Configuring `dot_env.json` / `.env.json`" below) and then use it from a browser entry point.
The included `index.html` demonstrates the full surface area of the polyfill,
including:
- `LanguageModel.create()` with options
- `prompt()` and `promptStreaming()`
- `append()` and `measureInputUsage()`
- Multimodal inputs (text, image, audio)
- Quota handling via `onquotaoverflow`
- `clone()` and `destroy()`
A simplified version of how it is wired up can be seen in `index.html`.
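A hedged sketch of that wiring (paths and backend choice are assumptions; swap in the config global that matches your backend):

```html
<script type="module">
  // Load the local config and expose it on the matching global
  // before the polyfill is imported.
  const env = await (await fetch('./.env.json')).json();
  window.GEMINI_CONFIG = env; // or FIREBASE_CONFIG / OPENAI_CONFIG / TRANSFORMERS_CONFIG

  await import('./prompt-api-polyfill.js');

  const session = await LanguageModel.create();
  console.log(await session.prompt('Hello!'));
</script>
```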
---
This repo ships with a template file, `dot_env.json`:
```jsonc
// dot_env.json
{
// For Firebase AI Logic:
"projectId": "",
"appId": "",
"modelName": "",
// For Firebase AI Logic OR Gemini OR OpenAI OR Transformers.js:
"apiKey": "",
// For Transformers.js:
"device": "webgpu",
"dtype": "q4f16",
}
```
You should treat `dot_env.json` as a template and create a real `.env.json`
with your secrets that is never committed.
Copy the template:

```bash
cp dot_env.json .env.json
```

Then open `.env.json` and fill in the values.
For Firebase AI Logic:
```json
{
  "apiKey": "YOUR_FIREBASE_WEB_API_KEY",
  "projectId": "your-gcp-project-id",
  "appId": "YOUR_FIREBASE_APP_ID",
  "modelName": "choose-model-for-firebase"
}
```
For Gemini:
```json
{
  "apiKey": "YOUR_GEMINI_API_KEY",
  "modelName": "choose-model-for-gemini"
}
```
For OpenAI:
```json
{
  "apiKey": "YOUR_OPENAI_API_KEY",
  "modelName": "choose-model-for-openai"
}
```
For Transformers.js:
```json
{
  "apiKey": "dummy",
  "modelName": "onnx-community/gemma-3-1b-it-ONNX-GQA",
  "device": "webgpu",
  "dtype": "q4f16"
}
```
- `apiKey`:
  - Firebase AI Logic: Your Firebase Web API key.
  - Gemini: Your Gemini API key.
  - OpenAI: Your OpenAI API key.
  - Transformers.js: Use `"dummy"`.
- `projectId` / `appId`: Firebase AI Logic only.
- `device`: Transformers.js only. Either `"webgpu"` or `"cpu"`.
- `dtype`: Transformers.js only. Quantization level (e.g., `"q4f16"`).
- `modelName` (optional): The model ID to use. If not provided, the polyfill
  uses the defaults defined in `backends/defaults.js`.
> Important: Do not commit a real `.env.json` with production
> credentials to source control. Use `dot_env.json` as the committed template
> and keep `.env.json` local.
Once `.env.json` is filled out, you can import it and expose it to the polyfill.
See the Quick start examples above. For Transformers.js, ensure
you set `window.TRANSFORMERS_CONFIG`.
---
Once the polyfill is loaded and `window.LanguageModel` is available, you can use
it as described in the Prompt API documentation.
For a complete, end-to-end example, see the `index.html` file in this directory.
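As a hedged sketch of typical usage (method names follow the Prompt API shape; run this in a page where the polyfill is already loaded):

```js
const session = await LanguageModel.create({
  initialPrompts: [{ role: 'system', content: 'You are a helpful assistant.' }],
});

// Non-streaming
console.log(await session.prompt('Write a haiku about polyfills.'));

// Streaming
for await (const chunk of session.promptStreaming('Explain polyfills briefly.')) {
  console.log(chunk);
}

session.destroy();
```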
---
1. Install dependencies:

   ```bash
   npm install
   ```
2. Copy and fill in your config:

   ```bash
   cp dot_env.json .env.json
   ```
3. Serve `index.html`:

   ```bash
   npm start
   ```

You should then see network requests to the backends in the logs.
---
The project includes a comprehensive test suite that runs in a headless browser.
It uses Playwright to run tests in a real Chromium instance. This is the
recommended way to verify environment fidelity and multimodal support.
```bash
npm run test:browser
```
To see the browser and DevTools while testing, you can modify
`vitest.browser.config.js` to set `headless: false`.
---
If you want to add your own backend provider, these are the steps to follow.
Create a new file in the `backends/` directory, for example
`backends/custom.js`. You need to extend the `PolyfillBackend` class and
implement the core methods that satisfy the expected interface.
```js
import PolyfillBackend from './base.js';
import { DEFAULT_MODELS } from './defaults.js';

export default class CustomBackend extends PolyfillBackend {
  constructor(config) {
    // config typically comes from a window global (e.g., window.CUSTOM_CONFIG)
    super(config.modelName || DEFAULT_MODELS.custom.modelName);
  }

  // Check if the backend is configured (e.g., an API key is present), if given
  // combinations of modelName and options are supported, or, for a local model,
  // if the model is available.
  static availability(options) {
    return window.CUSTOM_CONFIG?.apiKey ? 'available' : 'unavailable';
  }

  // Initialize the underlying SDK or API client. With local models, use
  // monitorTarget to report model download progress to the polyfill.
  createSession(options, sessionParams, monitorTarget) {
    // Return the initialized session or client instance
  }

  // Non-streaming prompt execution
  async generateContent(contents) {
    // contents: Array of { role: 'user'|'model', parts: [{ text: string }] }
    // Return: { text: string, usage: number }
  }

  // Streaming prompt execution
  async generateContentStream(contents) {
    // Return: AsyncIterable yielding chunks
  }

  // Token counting for quota/usage tracking
  async countTokens(contents) {
    // Return: total token count (number)
  }
}
```
The polyfill uses a "First-Match Priority" strategy based on global
configuration. You need to register your backend in the
`prompt-api-polyfill.js` file by adding it to the static `#backends` array:
```js
// prompt-api-polyfill.js
static #backends = [
  // ... existing backends
  {
    config: 'CUSTOM_CONFIG', // The global object to look for on `window`
    path: './backends/custom.js',
  },
];
```
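The first-match strategy itself can be illustrated with a small standalone sketch (a hypothetical helper, not the polyfill's actual code): the first registered entry whose global config object is present wins.

```javascript
// Sketch of first-match backend selection. The registry mirrors the
// documented strategy; selectBackend is a hypothetical illustration.
const backends = [
  { config: 'FIREBASE_CONFIG', path: './backends/firebase.js' },
  { config: 'GEMINI_CONFIG', path: './backends/gemini.js' },
  { config: 'OPENAI_CONFIG', path: './backends/openai.js' },
  { config: 'TRANSFORMERS_CONFIG', path: './backends/transformers.js' },
  { config: 'CUSTOM_CONFIG', path: './backends/custom.js' },
];

function selectBackend(globalScope, registry) {
  // Return the first registry entry whose config global is defined.
  return registry.find((b) => globalScope[b.config] !== undefined) ?? null;
}

// Example: both GEMINI_CONFIG and CUSTOM_CONFIG are set; Gemini wins
// because it appears earlier in the registry.
const fakeWindow = { GEMINI_CONFIG: { apiKey: 'x' }, CUSTOM_CONFIG: { apiKey: 'y' } };
console.log(selectBackend(fakeWindow, backends).path); // './backends/gemini.js'
```

This is why registration order matters: placing your custom entry earlier in the array gives it priority over the built-in backends.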
Define the fallback model identity in `backends/defaults.js`. This is used when
a user initializes a session without specifying a `modelName`.
```js
// backends/defaults.js
export const DEFAULT_MODELS = {
  // ...
  custom: { modelName: 'custom-model-pro-v1' },
};
```
The project uses a discovery script (`scripts/list-backends.js`) to generate
test matrices from `.env-[name].json` files. To include your new backend in the
test runner, create such a file (for example, `.env-custom.json`) in the root
directory:
```json
{
  "apiKey": "your-api-key-here",
  "modelName": "custom-model-pro-v1"
}
```
The final step is ensuring compliance. Because the polyfill is spec-driven, any
new backend should pass the official (or tentative) Web Platform Tests:
```bash
npm run test:wpt
```
This verification step ensures that your backend handles things like
`AbortSignal`, system prompts, and history formatting exactly as the Prompt API
specification expects.
---
Apache 2.0