Vercel AI Provider for Adding Memory to LLMs
The Mem0 AI SDK Provider is a community-maintained library developed by Mem0 to integrate with the Vercel AI SDK. It brings enhanced AI interaction capabilities to your applications by introducing persistent memory: language model conversations remember past interactions, enabling more contextualized and personalized responses.
Discover more of Mem0 on GitHub.
Explore the Mem0 Documentation to gain deeper control and flexibility in managing your memories.
For detailed information on using the Vercel AI SDK, refer to Vercel's API Reference and Documentation.

Features
- π§ Persistent memory storage for AI conversations
- π Seamless integration with Vercel AI SDK
- π Support for multiple LLM providers
- π Rich message format support
- β‘ Streaming capabilities
- π Context-aware responses
Installation

```bash
npm install @mem0/vercel-ai-provider
```

Getting Started
1. Obtain your Mem0 API Key from the Mem0 dashboard.
2. Initialize the Mem0 Client:
```typescript
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0({
  provider: "openai",
  mem0ApiKey: "m0-xxx",
  apiKey: "openai-api-key",
  config: {
    compatibility: "strict",
    // Additional model-specific configuration options can be added here.
  },
});
```
Note: The openai provider is used by default, so specifying it is optional:
```typescript
const mem0 = createMem0();
```
For better security, consider setting MEM0_API_KEY and OPENAI_API_KEY as environment variables.

3. Add Memories to Enhance Context:
```typescript
import { LanguageModelV1Prompt } from "ai";
import { addMemories } from "@mem0/vercel-ai-provider";

const messages: LanguageModelV1Prompt = [
  {
    role: "user",
    content: [
      { type: "text", text: "I love red cars." },
      { type: "text", text: "I like Toyota Cars." },
      { type: "text", text: "I prefer SUVs." },
    ],
  },
];

await addMemories(messages, { user_id: "borat" });
```

These memories are now stored in your profile. You can view and manage them on the Mem0 Dashboard.
Note: For standalone features such as addMemories and retrieveMemories, you must either set MEM0_API_KEY as an environment variable or pass it directly in the function call. Example:
```typescript
await addMemories(messages, { user_id: "borat", mem0ApiKey: "m0-xxx", org_id: "org_xx", project_id: "proj_xx" });
await retrieveMemories(prompt, { user_id: "borat", mem0ApiKey: "m0-xxx", org_id: "org_xx", project_id: "proj_xx" });
await getMemories(prompt, { user_id: "borat", mem0ApiKey: "m0-xxx", org_id: "org_xx", project_id: "proj_xx" });
```
Note: retrieveMemories enriches the prompt with relevant memories from your profile, while getMemories returns the memories as an array that you can process further.
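As a quick illustration of post-processing that array (a minimal sketch; the exact shape of each memory record is defined by the Mem0 API, not by this guide, so the logging below is illustrative):

```typescript
import { getMemories } from "@mem0/vercel-ai-provider";

// Fetch raw memory records for custom processing (ranking, filtering, analytics, ...).
const memories = await getMemories("What cars do I like?", { user_id: "borat" });

// Inspect each record before building logic on top of it; the field layout
// comes from the Mem0 platform, not from this guide.
for (const memory of memories) {
  console.log(memory);
}
```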
Usage Examples

1. Basic Text Generation with Memory Context
```typescript
import { generateText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { text } = await generateText({
  model: mem0("gpt-4-turbo", {
    user_id: "borat",
  }),
  prompt: "Suggest me a good car to buy!",
});
```

2. Combining OpenAI Provider with Memory Utils
```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { retrieveMemories } from "@mem0/vercel-ai-provider";

const prompt = "Suggest me a good car to buy.";
const memories = await retrieveMemories(prompt, { user_id: "borat" });

const { text } = await generateText({
  model: openai("gpt-4-turbo"),
  prompt: prompt,
  system: memories,
});
```

3. Structured Message Format with Memory
```typescript
import { generateText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { text } = await generateText({
  model: mem0("gpt-4-turbo", {
    user_id: "borat",
  }),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Suggest me a good car to buy." },
        { type: "text", text: "Why is it better than the other cars for me?" },
        { type: "text", text: "Give options for every price range." },
      ],
    },
  ],
});
```

4. Advanced Memory Integration (OpenAI)
```typescript
import { generateText, LanguageModelV1Prompt } from "ai";
import { openai } from "@ai-sdk/openai";
import { retrieveMemories } from "@mem0/vercel-ai-provider";

// New format using the system parameter for memory context
const messages: LanguageModelV1Prompt = [
  {
    role: "user",
    content: [
      { type: "text", text: "Suggest me a good car to buy." },
      { type: "text", text: "Why is it better than the other cars for me?" },
      { type: "text", text: "Give options for every price range." },
    ],
  },
];

const memories = await retrieveMemories(messages, { user_id: "borat" });

const { text } = await generateText({
  model: openai("gpt-4-turbo"),
  messages: messages,
  system: memories,
});
```

5. Streaming Responses with Memory Context
```typescript
import { streamText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { textStream } = await streamText({
  model: mem0("gpt-4-turbo", {
    user_id: "borat",
  }),
  prompt:
    "Suggest me a good car to buy! Why is it better than the other cars for me? Give options for every price range.",
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```

Core Functions
- createMem0(): Initializes a new Mem0 provider instance with optional configuration
- retrieveMemories(): Enriches prompts with relevant memories
- addMemories(): Adds memories to your profile
- getMemories(): Returns memories from your profile as an array
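All four are imported from the package root in the examples above; assuming getMemories is exported the same way (its usage earlier suggests it is), one import covers them:

```typescript
import {
  createMem0,
  addMemories,
  retrieveMemories,
  getMemories,
} from "@mem0/vercel-ai-provider";
```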
Configuration Options

```typescript
const mem0 = createMem0({
  config: {
    // ...
    // Additional model-specific configuration options can be added here.
  },
});
```
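For reference, a sketch combining only the options demonstrated earlier in this guide (provider, mem0ApiKey, apiKey, and compatibility); see the Mem0 docs for the full list:

```typescript
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0({
  provider: "openai",      // defaults to "openai" if omitted
  mem0ApiKey: "m0-xxx",    // optional if MEM0_API_KEY is set as an env variable
  apiKey: "openai-api-key", // key for the underlying LLM provider
  config: {
    compatibility: "strict", // model-specific option shown earlier
  },
});
```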
Best Practices

1. User Identification: Always provide a unique user_id for consistent memory retrieval.
2. Context Management: Use appropriate context window sizes to balance performance and memory usage.
3. Error Handling: Implement proper error handling for memory operations; see the sketch after this list.
4. Memory Cleanup: Regularly clean up unused memory contexts to optimize performance.
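As a minimal sketch of practice 3, assuming you want the conversation to proceed even if the memory backend is unreachable:

```typescript
import { LanguageModelV1Prompt } from "ai";
import { addMemories } from "@mem0/vercel-ai-provider";

const messages: LanguageModelV1Prompt = [
  { role: "user", content: [{ type: "text", text: "I love red cars." }] },
];

try {
  await addMemories(messages, { user_id: "borat" });
} catch (error) {
  // Degrade gracefully: log the failure and continue without persisted memory.
  console.error("Failed to store memories:", error);
}
```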
We also have support for agent_id, app_id, and run_id. Refer to the Docs.
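For example, a sketch reusing the messages array from the previous snippet and assuming these identifiers are accepted alongside user_id (the id values are placeholders):

```typescript
await addMemories(messages, {
  user_id: "borat",
  agent_id: "agent-xx", // placeholder identifiers; see the Mem0 docs
  app_id: "app-xx",
  run_id: "run-xx",
});
```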
Notes

- Requires proper API key configuration for the underlying providers (e.g., OpenAI)
- Memory features depend on proper user identification via user_id