Official TypeScript/JavaScript SDK for the Hypersave API - Your AI-powered memory layer
Documentation | Platform | API Reference
## Installation

```bash
npm install hypersave
# or
yarn add hypersave
# or
pnpm add hypersave
```
## Quick Start

```typescript
import { HypersaveClient } from 'hypersave';

const client = new HypersaveClient({
  apiKey: 'your-api-key',
  // baseUrl: 'https://api.hypersave.io', // Optional, this is the default
});

// Save content to your memory
const saved = await client.save({
  content: 'Meeting notes: Discussed Q4 roadmap with the team...',
  category: 'Work',
});

// Ask questions about your saved content
const answer = await client.ask('What was discussed in the Q4 meeting?');
console.log(answer.answer);

// Search your memories
const results = await client.search('roadmap');
console.log(results.results);
```
## Features

- Save: Store any content (text, URLs, documents) with AI-powered analysis
- Ask: Get verified answers from your personal knowledge base
- Search: Find relevant documents and facts using semantic search
- Query: Multi-strategy search with reminder support
- Profile: Build and query your user profile from extracted facts
- Graph: Explore your knowledge graph
## Performance

| Operation | Latency |
|-----------|---------|
| save() | ~50ms (async) |
| ask() - first query | ~1.5s |
| ask() - cached | under 10ms |
| search() | ~500ms |
Responses are automatically cached for 5 minutes for faster repeated queries.
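The server-side cache behaves like a simple time-to-live map keyed by query. A minimal client-side sketch of the same idea (the `TtlCache` class below is illustrative only and is not part of the SDK):

```typescript
// Minimal TTL cache sketch illustrating the 5-minute answer cache.
// `TtlCache` is illustrative; it is not part of the hypersave SDK.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

const cache = new TtlCache<string>(5 * 60 * 1000); // 5 minutes
cache.set('What was discussed in the Q4 meeting?', 'Q4 roadmap');
console.log(cache.get('What was discussed in the Q4 meeting?')); // 'Q4 roadmap'
```

Within the TTL window a repeated query is a map lookup, which is why cached `ask()` calls return in under 10ms.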
## Configuration

```typescript
interface HypersaveConfig {
  apiKey: string;   // Required: Your API key
  baseUrl?: string; // Optional: API base URL (default: https://api.hypersave.io)
  timeout?: number; // Optional: Request timeout in ms (default: 30000)
  userId?: string;  // Optional: Default user ID for all requests
}
```
#### save(options) - Save content
```typescript
// Async save (default) - returns immediately
const result = await client.save({
  content: 'https://example.com/article',
  title: 'Interesting Article',
  category: 'Research',
});

// Check status of async save
if (result.pendingId) {
  const status = await client.getSaveStatus(result.pendingId);
  console.log(status.status); // 'processing' | 'indexed' | 'complete' | 'error'
}
```
#### saveSync(options) - Save and wait for completion
```typescript
const result = await client.saveSync({
  content: 'Important note to remember',
});
console.log(`Saved! Extracted ${result.saved?.facts} facts`);
```
#### ask(query) - Ask a question
```typescript
const answer = await client.ask('What are my favorite programming languages?');
console.log(answer.answer);
console.log(`Confidence: ${answer.confidence}`);
console.log(`Used ${answer.context.memoriesUsed} memories`);
```
#### search(query, options) - Search documents and facts
```typescript
const results = await client.search('machine learning', {
  limit: 20,
  includeContext: true,
});

for (const result of results.results) {
  console.log(`[${result.type}] ${result.content} (${result.relevance})`);
}
```
#### query(message, options) - Multi-strategy search
```typescript
const result = await client.query('coffee meeting tomorrow', {
  limit: 30,
});

// Check for triggered reminders
if (result.reminders.length > 0) {
  console.log('Reminder:', result.reminders[0].content);
}
console.log(`Found ${result.stats.totalResults} results`);
```
#### getMemories(options) - List saved memories
```typescript
const memories = await client.getMemories({ limit: 100 });
console.log(`${memories.total} documents, ${memories.facts} facts`);

for (const doc of memories.documents) {
  console.log(`- ${doc.title} (${doc.type})`);
}
```
#### getProfile() - Get user profile
```typescript
const profile = await client.getProfile();
console.log('Profile:', profile.profile);
console.log(`Built from ${profile.facts.length} facts`);
```
#### getGraph() - Get knowledge graph
```typescript
const graph = await client.getGraph();
console.log(`${graph.nodes.length} nodes, ${graph.edges.length} edges`);
```
#### deleteMemory(id) - Delete a memory
```typescript
await client.deleteMemory('document-id-123');
```
#### remind(options) - Create a reminder
```typescript
const reminder = await client.remind({
  content: 'Buy milk',
  trigger: 'grocery store',
  priority: 3,
});
```
#### getUsage() - Get API usage stats
```typescript
const usage = await client.getUsage();
console.log(`${usage.usage.documentsIndexed} documents indexed`);
```
## Error Handling

The SDK provides typed errors for precise error handling:
```typescript
import {
  HypersaveClient,
  HypersaveError,
  AuthenticationError,
  ValidationError,
  RateLimitError,
  NotFoundError,
  isHypersaveError,
} from 'hypersave';

try {
  const result = await client.ask('my question');
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error('Invalid API key');
  } else if (error instanceof RateLimitError) {
    console.error(`Rate limited. Retry after ${error.retryAfter}s`);
  } else if (error instanceof ValidationError) {
    console.error('Invalid request:', error.details);
  } else if (error instanceof NotFoundError) {
    console.error('Resource not found');
  } else if (isHypersaveError(error)) {
    console.error(`API error (${error.statusCode}): ${error.message}`);
  } else {
    throw error;
  }
}
```
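Since `RateLimitError` carries a `retryAfter` value in seconds, it lends itself to a small retry wrapper. A hedged sketch (the `withRateLimitRetry` helper below is not part of the SDK; it assumes only that a thrown error exposes a numeric `retryAfter`):

```typescript
// Generic retry helper (illustrative, not part of the hypersave SDK).
// Retries a call when it fails with an error carrying a numeric
// `retryAfter` (seconds), waiting that long before the next attempt.
async function withRateLimitRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      const retryAfter = error?.retryAfter;
      // Rethrow non-rate-limit errors and give up after the last attempt.
      if (typeof retryAfter !== 'number' || attempt >= maxAttempts) throw error;
      await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
    }
  }
}

// Usage (assumes an existing `client`):
// const answer = await withRateLimitRetry(() => client.ask('my question'));
```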
## Types

Full TypeScript support with exported types:

```typescript
import type {
  HypersaveConfig,
  SaveOptions,
  SaveResult,
  AskResult,
  SearchResult,
  DocumentType,
  CategoryType,
  SectorType,
} from 'hypersave';
```
## Examples

Save an article and report the extracted facts:

```typescript
const result = await client.saveSync({
  content: 'https://arxiv.org/abs/2301.00001',
  category: 'Research',
});
console.log(`Saved: ${result.saved?.title}`);
console.log(`Extracted ${result.saved?.facts} facts`);
```
Feed memory context to your own LLM:

```typescript
async function chat(message: string) {
  // Search relevant context
  const context = await client.query(message, { limit: 5 });

  // Format context for your LLM
  const memories = context.results
    .map(r => r.content)
    .join('\n');

  // Pass to your LLM with the context
  const response = await yourLLM.chat({
    system: `Use this context from the user's memories:\n${memories}`,
    message,
  });

  return response;
}
```
Search with surrounding context included:

```typescript
const results = await client.search('project deadlines', {
  limit: 20,
  includeContext: true,
});
```
Hypersave works as a memory layer for any LLM, including local open-source models:
```typescript
import { HypersaveClient } from 'hypersave';
import ollama from 'ollama';

const hypersave = new HypersaveClient({
  apiKey: 'your-api-key',
  baseUrl: 'https://api.hypersave.io',
});

async function chatWithMemory(userMessage: string) {
  // Get memory-augmented answer from Hypersave
  const memoryResponse = await hypersave.ask(userMessage);
  console.log(`Found ${memoryResponse.context?.memoriesUsed} memories`);

  // Enhance with local LLM for richer response
  const response = await ollama.chat({
    model: 'gpt-oss:20b', // or llama3.1, mistral, etc.
    messages: [
      {
        role: 'system',
        content: `User info from memory: "${memoryResponse.answer}". Use this to personalize.`,
      },
      { role: 'user', content: userMessage },
    ],
  });

  return response.message.content;
}

// Save facts to Hypersave
await hypersave.save({ content: 'I work as a software engineer at Google', type: 'text' });
await hypersave.save({ content: 'My dog Max loves to play fetch', type: 'text' });

// Query with memory - the LLM now knows your personal context
const answer = await chatWithMemory('What is my job?');
// Output: "You work as a software engineer at Google"
```
Validated with: GPT-OSS 20B, Llama 3.1, Qwen 2.5, Gemma 2, and other Ollama-compatible models.
## License

MIT