# React Native RAG

Private, local RAGs. Supercharge LLMs with your own knowledge base.
- :rocket: Features
- :earth_africa: Real-World Example
- :package: Installation
- :iphone: Quickstart - Example App
- :books: Usage
  - Using the useRAG Hook
  - Using the RAG Class
  - Using RAG Components Separately
- :jigsaw: Using Custom Components
- :electric_plug: Plugins
- :handshake: Contributing
- :page_facing_up: License
## :rocket: Features

* Modular: Use only the components you need. Choose from `LLM`, `Embeddings`, `VectorStore`, and `TextSplitter`.
* Extensible: Create your own components by implementing the `LLM`, `Embeddings`, `VectorStore`, and `TextSplitter` interfaces.
* Multiple Integration Options: Whether you prefer a simple hook (`useRAG`), a powerful class (`RAG`), or direct component interaction, the library adapts to your needs.
* On-device Inference: Powered by `@react-native-rag/executorch`, allowing for private and efficient model execution directly on the user's device.
* Vector Store Persistence: Includes support for SQLite with `@react-native-rag/op-sqlite` to save and manage vector stores locally.
* Semantic Search Ready: Easily implement powerful semantic search in your app by using the `VectorStore` and `Embeddings` components directly.
## :earth_africa: Real-World Example

React Native RAG is powering Private Mind, a privacy-first mobile AI app available on the App Store and Google Play.
## :package: Installation

```sh
npm install react-native-rag
```
You will also need an embeddings model and a large language model. We recommend using `@react-native-rag/executorch` for on-device inference. To use it, install the following packages:
```sh
npm install @react-native-rag/executorch react-native-executorch
```
For persisting vector stores, you can use `@react-native-rag/op-sqlite`:
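```sh
npm install @react-native-rag/op-sqlite
```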
## :iphone: Quickstart - Example App

For a complete, working demonstration of how to use the library, check out the example app.
## :books: Usage

We offer three ways to integrate RAG, depending on your needs.
### Using the useRAG Hook

The easiest way to get started. Good for simple use cases where you want to quickly set up RAG.
```tsx
import React from 'react';
import { Text } from 'react-native';
import { useRAG, MemoryVectorStore } from 'react-native-rag';
import {
  ALL_MINILM_L6_V2,
  ALL_MINILM_L6_V2_TOKENIZER,
  LLAMA3_2_1B_QLORA,
  LLAMA3_2_1B_TOKENIZER,
  LLAMA3_2_TOKENIZER_CONFIG,
} from 'react-native-executorch';
import {
  ExecuTorchEmbeddings,
  ExecuTorchLLM,
} from '@react-native-rag/executorch';

const vectorStore = new MemoryVectorStore({
  embeddings: new ExecuTorchEmbeddings({
    modelSource: ALL_MINILM_L6_V2,
    tokenizerSource: ALL_MINILM_L6_V2_TOKENIZER,
  }),
});

const llm = new ExecuTorchLLM({
  modelSource: LLAMA3_2_1B_QLORA,
  tokenizerSource: LLAMA3_2_1B_TOKENIZER,
  tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
});

const App = () => {
  const rag = useRAG({ vectorStore, llm });

  return <Text>{rag.response}</Text>;
};
```
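Once the hook is ready, you can feed documents into the store and generate augmented answers. The sketch below is illustrative only: it assumes the object returned by `useRAG` exposes `splitAddDocument` and `generate` helpers mirroring the `RAG` class; check the library's type definitions for the exact shape.

```tsx
// Inside App: hypothetical usage of the object returned by useRAG
// (splitAddDocument/generate are assumed names, not verified signatures).
const onAsk = async () => {
  // Chunk a document, embed the chunks, and store the vectors.
  await rag.splitAddDocument('React Native RAG runs models fully on-device.');

  // Retrieve relevant chunks and stream an answer into rag.response.
  await rag.generate('Where does react-native-rag run inference?');
};
```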
### Using the RAG Class

For more control over components and configuration.
```tsx
import React, { useEffect, useState } from 'react';
import { Text } from 'react-native';
import { RAG, MemoryVectorStore } from 'react-native-rag';
import {
  ExecuTorchEmbeddings,
  ExecuTorchLLM,
} from '@react-native-rag/executorch';
import {
  ALL_MINILM_L6_V2,
  ALL_MINILM_L6_V2_TOKENIZER,
  LLAMA3_2_1B_QLORA,
  LLAMA3_2_1B_TOKENIZER,
  LLAMA3_2_TOKENIZER_CONFIG,
} from 'react-native-executorch';

const App = () => {
  const [rag, setRag] = useState<RAG | null>(null);
  const [response, setResponse] = useState<string>('');

  useEffect(() => {
    const initializeRAG = async () => {
      const embeddings = new ExecuTorchEmbeddings({
        modelSource: ALL_MINILM_L6_V2,
        tokenizerSource: ALL_MINILM_L6_V2_TOKENIZER,
      });

      const llm = new ExecuTorchLLM({
        modelSource: LLAMA3_2_1B_QLORA,
        tokenizerSource: LLAMA3_2_1B_TOKENIZER,
        tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
        responseCallback: setResponse,
      });

      const vectorStore = new MemoryVectorStore({ embeddings });

      const ragInstance = new RAG({ llm, vectorStore });
      await ragInstance.load();
      setRag(ragInstance);
    };

    initializeRAG();
  }, []);

  return <Text>{response}</Text>;
};
```
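With the instance in state, you can add knowledge and query it. A minimal sketch, assuming the `RAG` class exposes `splitAddDocument` and `generate` methods as used elsewhere in this README (verify against the exported types):

```tsx
// Hypothetical helpers built on the rag instance created above
// (method names are assumptions based on this README):
const addKnowledge = async (rag: RAG) => {
  // Chunk, embed, and store a document in the vector store.
  await rag.splitAddDocument('Private Mind is a privacy-first mobile AI app.');
};

const ask = async (rag: RAG) => {
  // Tokens are streamed through the responseCallback passed to the LLM.
  await rag.generate('What is Private Mind?');
};
```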
### Using RAG Components Separately

For advanced use cases requiring fine-grained control. This is also the recommended approach for implementing semantic search in your app: use the `VectorStore` and `Embeddings` classes directly.
```tsx
import React, { useEffect, useState } from 'react';
import { Text } from 'react-native';
import { MemoryVectorStore } from 'react-native-rag';
import {
  ExecuTorchEmbeddings,
  ExecuTorchLLM,
} from '@react-native-rag/executorch';
import {
  ALL_MINILM_L6_V2,
  ALL_MINILM_L6_V2_TOKENIZER,
  LLAMA3_2_1B_QLORA,
  LLAMA3_2_1B_TOKENIZER,
  LLAMA3_2_TOKENIZER_CONFIG,
} from 'react-native-executorch';

const App = () => {
  const [embeddings, setEmbeddings] = useState<ExecuTorchEmbeddings | null>(null);
  const [llm, setLLM] = useState<ExecuTorchLLM | null>(null);
  const [vectorStore, setVectorStore] = useState<MemoryVectorStore | null>(null);
  const [response, setResponse] = useState<string>('');

  useEffect(() => {
    const initialize = async () => {
      // Instantiate and load the Embeddings Model.
      // NOTE: Calling load on the VectorStore will automatically load the
      // embeddings model, so loading it separately is not necessary here.
      const embeddings = await new ExecuTorchEmbeddings({
        modelSource: ALL_MINILM_L6_V2,
        tokenizerSource: ALL_MINILM_L6_V2_TOKENIZER,
      }).load();

      // Instantiate and load the Large Language Model.
      const llm = await new ExecuTorchLLM({
        modelSource: LLAMA3_2_1B_QLORA,
        tokenizerSource: LLAMA3_2_1B_TOKENIZER,
        tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
        responseCallback: setResponse,
      }).load();

      // Instantiate and initialize the Vector Store.
      const vectorStore = await new MemoryVectorStore({ embeddings }).load();

      setEmbeddings(embeddings);
      setLLM(llm);
      setVectorStore(vectorStore);
    };

    initialize();
  }, []);

  return <Text>{response}</Text>;
};
```
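For plain semantic search you don't need the LLM at all. Below is a sketch under assumed method names (`add`, `similaritySearch`); verify them against the `VectorStore` interface before relying on this:

```tsx
// Hypothetical semantic search using the vector store alone
// (the add/similaritySearch method names are assumptions):
const search = async (vectorStore: MemoryVectorStore) => {
  await vectorStore.add('The capital of France is Paris.');
  await vectorStore.add('React Native renders native UI components.');

  // Embed the query and return the most similar stored chunks.
  const results = await vectorStore.similaritySearch('French cities');
  console.log(results);
};
```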
## :jigsaw: Using Custom Components

Bring your own components by creating classes that implement the `LLM`, `Embeddings`, `VectorStore`, and `TextSplitter` interfaces. This allows you to use any model or service that fits your needs.
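For example, you could wrap a remote embeddings API. The sketch below guesses at the `Embeddings` interface shape (`load`/`unload`/`embed`) and uses a placeholder endpoint; consult the library's exported types before implementing:

```tsx
import type { Embeddings } from 'react-native-rag';

// Hypothetical custom embeddings backed by a remote API.
// The interface shape (load/unload/embed) is an assumption,
// and https://example.com/embed is a placeholder endpoint.
class MyApiEmbeddings implements Embeddings {
  async load(): Promise<this> {
    // Warm up connections or fetch model metadata here if needed.
    return this;
  }

  async unload(): Promise<void> {
    // Release any resources held by this component.
  }

  async embed(text: string): Promise<number[]> {
    const res = await fetch('https://example.com/embed', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text }),
    });
    const { embedding } = await res.json();
    return embedding;
  }
}
```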
## :electric_plug: Plugins

* `@react-native-rag/executorch`: On-device inference with `react-native-executorch`.
* `@react-native-rag/op-sqlite`: Persisting vector stores using SQLite.
## :handshake: Contributing

Contributions are welcome! See the contributing guide to learn about the development workflow.
## :page_facing_up: License

MIT
Since 2012, Software Mansion has been a software agency with experience in building web and mobile apps. We are Core React Native Contributors and experts in dealing with all kinds of React Native issues. We can help you build your next dream product – Hire us.
