ChatGPT integration for Nuxt 3
```sh
npm install nuxt-chatgpt
```
This module provides access to the chat and chatCompletion methods through the useChatgpt() composable. Requests are routed through a Nitro server endpoint, so your API key is never exposed to the client. Behind the scenes, the module uses the openai library (version 4.x).
* useChatgpt() composable that grants easy access to the chat, chatCompletion, chatCompletionStream, and generateImage methods.
* chatCompletionStream for real-time streamed responses (SSE).
1. Add nuxt-chatgpt dependency to your project

* npm

```sh
npm install --save-dev nuxt-chatgpt
```

* pnpm

```sh
pnpm add -D nuxt-chatgpt
```

* yarn

```sh
yarn add --dev nuxt-chatgpt
```
2. Add nuxt-chatgpt to the modules section of nuxt.config.ts

```js
export default defineNuxtConfig({
  modules: ["nuxt-chatgpt"],
  // entirely optional
  chatgpt: {
    apiKey: 'Your API key goes here'
  },
})
```
That's it! You can now use Nuxt Chatgpt in your Nuxt app 🔥
Usage & Examples
To access the chat, chatCompletion, chatCompletionStream, and generateImage methods in the nuxt-chatgpt module, you can use the useChatgpt() composable, which provides easy access to them.
The chat, chatCompletion, and chatCompletionStream methods accept the following parameters:
| Name | Type | Default | Description |
| ------------ | -------- | -------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| message | String | available only for chat() | A string representing the text message that you want to send to the GPT model for processing. |
| messages | Array | available only for chatCompletion() and chatCompletionStream() | An array of objects that contains role and content |
| model        | String   | gpt-5-mini                                                                                   | Represents the model to use for the natural language processing task.                                                                                                                  |
| options      | Object   | { temperature: 0.5, max_tokens: 2048, top_p: 1, frequency_penalty: 0, presence_penalty: 0 }  | An optional object that specifies any additional options you want to pass to the API request, such as the number of responses to generate and the maximum length of each response.     |
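As a sketch of how the defaults above combine with the options you pass: keys you supply override the corresponding defaults, and everything else keeps its default value. The merge helper below is hypothetical (not part of the module) and only illustrates that behavior in plain JavaScript.

```javascript
// Defaults from the table above. buildRequestOptions is a hypothetical
// helper: a shallow merge of user options over the defaults.
const defaultOptions = {
  temperature: 0.5,
  max_tokens: 2048,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
}

function buildRequestOptions(userOptions = {}) {
  return { ...defaultOptions, ...userOptions }
}

const opts = buildRequestOptions({ temperature: 0.9 })
console.log(opts.temperature) // 0.9 — overridden
console.log(opts.max_tokens)  // 2048 — default kept
```

In a component this corresponds to a call like `await chatCompletion(messages, 'gpt-5-mini', { temperature: 0.9 })`.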
The generateImage method requires one parameter:
| Name | Type | Default | Description |
| ----------- | -------- | -------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| message     | String   |                                                                                              | A text description of the desired image(s). The maximum length is 1000 characters.                                                                                                         |
| model | String | gpt-image-1-mini | The model to use for image generation. |
| options     | Object   | { n: 1, quality: 'standard', response_format: 'url', size: '1024x1024', style: 'natural' }   | An optional object that specifies any additional options you want to pass to the API request, such as the number of images to generate, and the quality, size, and style of the generated images. |
Available models:
* text-davinci-002
* text-davinci-003
* gpt-3.5-turbo
* gpt-3.5-turbo-0301
* gpt-3.5-turbo-1106
* gpt-4
* gpt-4o
* gpt-4o-mini
* gpt-4-turbo
* gpt-4-1106-preview
* gpt-4-0314
* gpt-4-0613
* gpt-4-32k
* gpt-4-32k-0314
* gpt-4-32k-0613
* gpt-5-nano
* gpt-5-mini
* gpt-5-pro
* gpt-5.1
* gpt-5.2-pro
* gpt-5.2
* dall-e-3
* gpt-image-1
* gpt-image-1-mini
* gpt-image-1.5
Simple chat usage
In the following example, the model is unspecified, and the gpt-5-mini model will be used by default.
```js
const { chat } = useChatgpt()

const data = ref('')
const inputData = ref('')

async function sendMessage() {
  try {
    const response = await chat(inputData.value)
    data.value = response
  } catch (error) {
    alert(`Verify your organization if you want to use GPT-5 models: ${error}`)
  }
}
```
```html
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendMessage"
      v-text="'Send'"
    />
    <div>{{ data }}</div>
  </div>
</template>
```
chat usage with a different model
```js
const { chat } = useChatgpt()

const data = ref('')
const inputData = ref('')

async function sendMessage() {
  try {
    const response = await chat(inputData.value, 'gpt-5-mini')
    data.value = response
  } catch (error) {
    alert(`Verify your organization if you want to use GPT-5 models: ${error}`)
  }
}
```
```html
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendMessage"
      v-text="'Send'"
    />
    <div>{{ data }}</div>
  </div>
</template>
```
Simple chatCompletion usage
In the following example, the model is unspecified, and the gpt-5-mini model will be used by default.
```js
const { chatCompletion } = useChatgpt()

const chatTree = ref([])
const inputData = ref('')

async function sendMessage() {
  try {
    const message = {
      role: 'user',
      content: inputData.value,
    }
    chatTree.value.push(message)

    const response = await chatCompletion(chatTree.value)

    const responseMessage = {
      role: response[0].message.role,
      content: response[0].message.content
    }
    chatTree.value.push(responseMessage)
  } catch (error) {
    alert(`Verify your organization if you want to use GPT-5 models: ${error}`)
  }
}
```
```html
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendMessage"
      v-text="'Send'"
    />
    <div
      v-for="chat in chatTree"
      :key="chat"
    >
      <p>{{ chat.role }} :</p>
      <p>{{ chat.content }}</p>
    </div>
  </div>
</template>
```
chatCompletion usage with a different model
```js
const { chatCompletion } = useChatgpt()

const chatTree = ref([])
const inputData = ref('')

async function sendMessage() {
  try {
    const message = {
      role: 'user',
      content: inputData.value,
    }
    chatTree.value.push(message)

    const response = await chatCompletion(chatTree.value, 'gpt-5-mini')

    const responseMessage = {
      role: response[0].message.role,
      content: response[0].message.content
    }
    chatTree.value.push(responseMessage)
  } catch (error) {
    alert(`Verify your organization if you want to use GPT-5 models: ${error}`)
  }
}
```
```html
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendMessage"
      v-text="'Send'"
    />
    <div
      v-for="chat in chatTree"
      :key="chat"
    >
      <p>{{ chat.role }} :</p>
      <p>{{ chat.content }}</p>
    </div>
  </div>
</template>
```
chatCompletionStream usage
In the following example, the model is unspecified, and the gpt-5-mini model will be used by default.
```js
const { chatCompletionStream } = useChatgpt()

const chatTree = ref([])
const inputData = ref('')

async function sendStreamedMessage() {
  try {
    const message = {
      role: 'user',
      content: inputData.value,
    }
    chatTree.value.push(message)

    const assistantMessage = {
      role: 'assistant',
      content: ''
    }
    chatTree.value.push(assistantMessage)

    // IMPORTANT: do not send the placeholder assistant message to the server
    const payloadMessages = chatTree.value.slice(0, -1)

    await chatCompletionStream(payloadMessages, undefined, undefined, {
      onToken(token) {
        assistantMessage.content += token
      },
      onDone() {
        // streaming finished
      },
      onError(err) {
        alert(`Stream error: ${typeof err === "string" ? err : err?.message || "Unknown"}`)
      }
    })
  } catch (error) {
    alert(`Verify your organization if you want to use GPT-5 models: ${error}`)
  }
}
```
```html
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendStreamedMessage"
      v-text="'Send Streamed'"
    />
    <div
      v-for="chat in chatTree"
      :key="chat"
    >
      <p>{{ chat.role }} :</p>
      <p>{{ chat.content }}</p>
    </div>
  </div>
</template>
```
Simple generateImage usage
In the following example, the model is unspecified, and the gpt-image-1-mini model will be used by default.
```js
const { generateImage } = useChatgpt()

const images = ref([])
const inputData = ref('')
const loading = ref(false)

function b64ToBlobUrl(b64) {
  const bytes = Uint8Array.from(atob(b64), (c) => c.charCodeAt(0))
  const blob = new Blob([bytes], { type: 'image/png' })
  return URL.createObjectURL(blob)
}

async function sendPrompt() {
  loading.value = true
  try {
    const result = await generateImage(inputData.value)
    images.value = result.map((img) => ({
      url: b64ToBlobUrl(img.b64_json),
    }))
  } catch (error) {
    alert(`Error: ${error}`)
  }
  loading.value = false
}
```
```html
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendPrompt"
      v-text="'Send Prompt'"
    />
    <div v-if="loading">Generating, please wait ...</div>
    <img
      v-for="image in images"
      :key="image.url"
      :src="image.url"
    >
  </div>
</template>
```
generateImage usage with a different model and options
```js
const { generateImage } = useChatgpt()

const images = ref([])
const inputData = ref('')
const loading = ref(false)

async function sendPrompt() {
  loading.value = true
  try {
    images.value = await generateImage(inputData.value, 'dall-e-3', {
      n: 1,
      quality: 'standard',
      response_format: 'url',
      size: '1024x1024',
      style: 'natural'
    })
  } catch (error) {
    alert(`Error: ${error}`)
  }
  loading.value = false
}
```
```html
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendPrompt"
      v-text="'Send Prompt'"
    />
    <div v-if="loading">Generating, please wait ...</div>
    <img
      v-for="image in images"
      :key="image.url"
      :src="image.url"
    >
  </div>
</template>
```
chat vs chatCompletion
The chat method allows the user to send a prompt to the OpenAI API and receive a response. You can use this endpoint to build conversational interfaces that can interact with users in a natural way. For example, you could use the chat method to build a chatbot that can answer customer service questions or provide information about a product or service.
The chatCompletion method is similar to the chat method, but it provides additional functionality for generating longer, more complex responses. Specifically, the chatCompletion method allows you to provide a conversation history as input, which the API can use to generate a response that is consistent with the context of the conversation. This makes it possible to build chatbots that can engage in longer, more natural conversations with users.
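The difference is easiest to see in the shape of the input. A minimal sketch (the conversation content below is made up for illustration; the commented-out calls show where each method fits):

```javascript
// chat() takes a single string prompt; chatCompletion() takes the whole
// conversation history as an array of { role, content } objects, which is
// what lets the model answer the last question in context.
const history = [
  { role: 'user', content: 'What is Nuxt?' },
  { role: 'assistant', content: 'Nuxt is a Vue framework for building web apps.' },
  { role: 'user', content: 'Does it support server routes?' }, // answered in context
]

// const reply = await chat('What is Nuxt?')                    // one-off prompt
// const response = await chatCompletion(history, 'gpt-5-mini') // contextual reply

console.log(history.length) // 3 — the full history is sent on every turn
```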
chatCompletionStream vs chatCompletion
The chatCompletionStream method returns the assistant response as a stream (token-by-token). This is useful when you want to build a ChatGPT-like UI where the answer is displayed while it's being generated, instead of waiting for the full message.
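The streaming pattern boils down to appending each token to a placeholder assistant message as it arrives, so a reactive UI re-renders with every chunk. A self-contained sketch with a simulated token stream (the tokens below are made up; in the module they come from the SSE response via the onToken callback):

```javascript
// Placeholder assistant message that the UI would render reactively.
const assistantMessage = { role: 'assistant', content: '' }

// chatCompletionStream invokes a callback like this once per token.
function onToken(token) {
  assistantMessage.content += token
}

// Simulated stream — stands in for the real SSE chunks.
for (const token of ['Hel', 'lo', ' wor', 'ld']) {
  onToken(token)
}

console.log(assistantMessage.content) // "Hello world"
```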
Module Options
| Name | Type | Default | Description |
| ------------- | --------- | -------- | ------------------------------------------------ |
| apiKey        | String    | xxxxxx   | Your OpenAI API key.                             |
| isEnabled | Boolean | true | Enable or disable the module. True by default. |
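For example, to keep the module installed but switch it off (say, in a preview environment), set isEnabled to false in nuxt.config.ts. This is a sketch using the options from the table above; the CHATGPT_API_KEY environment variable name is an assumption for illustration, not something the module defines:

```js
// nuxt.config.ts — disable the module without removing it
export default defineNuxtConfig({
  modules: ["nuxt-chatgpt"],
  chatgpt: {
    apiKey: process.env.CHATGPT_API_KEY, // keep the key out of source control
    isEnabled: false,
  },
})
```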
Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Don't forget to give the project a star! Thanks again!
1. Fork the Project
2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
3. Commit your Changes (git commit -m 'Add some AmazingFeature')
4. Push to the Branch (git push origin feature/AmazingFeature)
5. Open a Pull Request
License
Distributed under the MIT License. See LICENSE.txt for more information.
Contact
Oliver Trajceski - LinkedIn - oliver@akrinum.com
Project Link: https://nuxtchatgpt.com
Development
```bash
# Install dependencies
npm install

# Generate type stubs
npm run dev:prepare

# Develop with the playground
npm run dev

# Build the playground
npm run dev:build

# Run ESLint
npm run lint

# Run Vitest
npm run test
npm run test:watch

# Release new version
npm run release
```