An agent component for Convex.
Convex provides powerful building blocks for agentic AI applications,
leveraging Components and existing Convex features.
With Convex, you can separate your long-running agentic workflows from your UI,
without the user losing reactivity and interactivity.
```sh
npm i @convex-dev/agent
```
AI Agents, built on Convex.
Check out the docs here.
The Agent component is a core building block for AI agents. It manages
threads and messages, around which your Agents can cooperate in static or dynamic
workflows.
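
As a rough sketch, defining an agent and replying in a thread from a Convex action
might look like the following. This is not the authoritative API: the constructor
options and thread methods shown (e.g. `chat`, `createThread`, `generateText`) are
taken from one version of the component and may differ in yours; check the docs.

```ts
// convex/example.ts — a minimal sketch, assuming the constructor and thread API
// described in the docs; verify exact names against your installed version.
import { Agent } from "@convex-dev/agent";
import { openai } from "@ai-sdk/openai";
import { action } from "./_generated/server";
import { components } from "./_generated/api";
import { v } from "convex/values";

// An Agent bundles a model, instructions, and (optionally) tools for one use case.
export const supportAgent = new Agent(components.agent, {
  name: "Support Agent",
  chat: openai.chat("gpt-4o-mini"),
  instructions: "You are a concise, helpful support agent.",
});

export const askQuestion = action({
  args: { userId: v.string(), prompt: v.string() },
  handler: async (ctx, { userId, prompt }) => {
    // Threads persist messages; create one per conversation and reuse it later.
    const { threadId, thread } = await supportAgent.createThread(ctx, { userId });
    const { text } = await thread.generateText({ prompt });
    return { threadId, text };
  },
});
```

Key features: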
- Agents provide an abstraction over LLM calls: each Agent bundles a use case's
  model, prompts, Tool Calls, and behavior in relation to other Agents, functions,
  APIs, and more (see the tool-call sketch after this list).
- Threads persist
messages and can be shared by
multiple users and agents (including
human agents).
- Streaming text and objects uses deltas over websockets so all clients stay in
  sync efficiently, without HTTP streaming. This also enables streaming from async
  functions (see the streaming sketch after this list).
- Conversation context is
automatically included in each LLM call, including built-in hybrid vector/text
search for messages in the thread and opt-in search for messages from other
threads (for the same specified user).
- RAG techniques are supported for prompt
augmentation from other sources, either up front in the prompt or as tool
calls. Integrates with the
RAG Component, or DIY.
- Workflows allow building
  multi-step operations that can span agents and users, durably and reliably.
- Files are supported in thread history
with automatic saving to file storage
and ref-counting.
- Debugging is enabled by callbacks,
  by the agent playground (where you can inspect all metadata and iterate on
  prompts and context settings), and by inspection in the dashboard.
- Usage tracking is easy to set
up, enabling usage attribution per-provider, per-model, per-user, per-agent,
for billing & more.
- Rate limiting, powered by the
Rate Limiter Component,
helps control the rate at which users can interact with agents and keep you
from exceeding your LLM provider's limits.
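
To give an Agent use-case-specific capabilities, tools can be defined alongside it.
The sketch below assumes the package's `createTool` helper with a Zod argument
schema; the exact handler context and signatures may vary by version, and
`api.orders.status` is a hypothetical query you would replace with your own.

```ts
// convex/orderAgent.ts — a hedged sketch of a tool-calling agent.
import { Agent, createTool } from "@convex-dev/agent";
import { openai } from "@ai-sdk/openai";
import { components, api } from "./_generated/api";
import { z } from "zod";

// A tool the LLM can call; the handler runs server-side with a Convex context
// (assumed here to support ctx.runQuery).
const lookupOrder = createTool({
  description: "Look up the status of an order by its id",
  args: z.object({ orderId: z.string() }),
  handler: async (ctx, { orderId }) => {
    // Hypothetical query — replace with your own function.
    return await ctx.runQuery(api.orders.status, { orderId });
  },
});

export const orderAgent = new Agent(components.agent, {
  name: "Order Agent",
  chat: openai.chat("gpt-4o-mini"),
  instructions: "Answer questions about the user's orders.",
  tools: { lookupOrder },
});
```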
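
Streaming works from any action: deltas are persisted so subscribed clients update
over websockets. The `saveStreamDeltas` option and `consumeStream` call below are
assumptions based on the component's docs, and the React helpers in
`@convex-dev/agent/react` (e.g. a `useThreadMessages` hook, if your version provides
one) can subscribe to the thread reactively.

```ts
// convex/streaming.ts — a sketch of streaming a reply into an existing thread.
import { action } from "./_generated/server";
import { v } from "convex/values";
import { supportAgent } from "./example"; // the agent defined in the sketch above

export const streamAnswer = action({
  args: { threadId: v.string(), prompt: v.string() },
  handler: async (ctx, { threadId, prompt }) => {
    // Continue the thread so context from earlier messages is included.
    const { thread } = await supportAgent.continueThread(ctx, { threadId });
    // Persist deltas as they arrive so clients stay in sync without HTTP streaming.
    const result = await thread.streamText({ prompt }, { saveStreamDeltas: true });
    // Make sure the stream finishes even if no client is actively consuming it.
    await result.consumeStream();
  },
});
```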
Read the associated Stack post here.

Read the docs for more details.
Play with the example:
```sh
git clone https://github.com/get-convex/agent.git
cd agent
npm run setup
npm run dev
```
Found a bug? Feature request?
File it here.
