A production-grade RAG library with OpenAI embeddings and Pinecone vector storage
A simple RAG (Retrieval-Augmented Generation) library that lets you ask questions about your documents using OpenAI and Pinecone.
```bash
npm install context-window
```
1. Get API keys from OpenAI and Pinecone
2. Set environment variables:
```bash
OPENAI_API_KEY=sk-...
PINECONE_API_KEY=...
PINECONE_INDEX=context-window
PINECONE_ENVIRONMENT=us-east-1
```
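The library reads these keys from the environment, so it helps to fail fast if any are missing. A minimal pre-flight check (just a sketch, not part of the package):

```typescript
// Sketch only: assumes the variables above are exposed via process.env
// (exported in your shell or loaded from a .env file before this runs).
const required = [
  "OPENAI_API_KEY",
  "PINECONE_API_KEY",
  "PINECONE_INDEX",
  "PINECONE_ENVIRONMENT",
];

for (const key of required) {
  if (!process.env[key]) {
    throw new Error(`Missing environment variable: ${key}`);
  }
}
```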
```typescript
import { createCtxWindow, getCtxWindow } from "context-window";
// Ingest documents
await createCtxWindow({
namespace: "my-docs",
data: ["./my-book.pdf"], // .txt, .md, .pdf supported
ai: { provider: "openai", model: "gpt-4o-mini" },
vectorStore: { provider: "pinecone" }
});
// Ask questions
const cw = getCtxWindow("my-docs");
const { text, sources } = await cw.ask("What is this document about?");
console.log(text);
console.log(sources);
```
`createCtxWindow(options)` - Ingest documents and create a context window
- `namespace`: Unique identifier
- `data`: File paths or directories (.txt, .md, .pdf)
- `ai`: `{ provider: "openai", model?: "gpt-4o-mini" }`
- `vectorStore`: `{ provider: "pinecone" }`
- `chunk`: `{ size?: 1000, overlap?: 150 }`
- `limits`: `{ topK?: 8, maxContextChars?: 8000, scoreThreshold?: 0 }`
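For example, a sketch that overrides the optional chunking and retrieval settings; the namespace, paths, and values here are placeholders, not recommendations:

```typescript
import { createCtxWindow } from "context-window";

// Illustrative values only; omitted options fall back to the defaults listed above.
await createCtxWindow({
  namespace: "my-notes",
  data: ["./notes"], // a directory of .txt/.md/.pdf files
  ai: { provider: "openai", model: "gpt-4o-mini" },
  vectorStore: { provider: "pinecone" },
  chunk: { size: 800, overlap: 100 },
  limits: { topK: 5, maxContextChars: 6000, scoreThreshold: 0.2 },
});
```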
`getCtxWindow(namespace)` - Retrieve a context window
`cw.ask(question)` - Returns `{ text: string, sources: string[] }`
- `hasCtxWindow(namespace)` - Check if exists
- `deleteCtxWindow(namespace)` - Remove from registry
- `clearCtxWindows()` - Clear all
- `listCtxWindows()` - List all
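A quick sketch of the registry helpers, assuming they are exported from the package entry point like `createCtxWindow` and `getCtxWindow`:

```typescript
import {
  hasCtxWindow,
  listCtxWindows,
  deleteCtxWindow,
  clearCtxWindows,
} from "context-window";

console.log(listCtxWindows()); // list every registered context window

if (hasCtxWindow("my-docs")) {
  deleteCtxWindow("my-docs"); // drop a single namespace from the registry
}

clearCtxWindows(); // drop them all
```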
License: MIT