# CompressMemory.ai

Memory compression adapter for Supermemory and other AI memory systems.

> ⚠️ **Work in progress.** I started this project kind of randomly and honestly don't know if it will lead to anything concrete; just having fun and trying stuff out for now!
A memory compression adapter that sits on top of Supermemory, reducing storage size by 45-60% while maintaining retrieval quality.
CompressMemory.ai is a middleware layer that transparently compresses memory content before it reaches Supermemory, and decompresses it on retrieval. This reduces storage costs and improves scalability without affecting semantic search quality.
## Features

- **Text Compression**: zstd compression achieving ~45% size reduction
- **Hash-based Deduplication**: detects repeated content and stores it only once (see the sketch below)
- **Pass-through Embeddings**: Supermemory generates embeddings from the original content, so search quality is unaffected
- **Storage Metrics**: track compression savings per user/container
- **Lazy Decompression**: only memory items that are actually retrieved are decompressed
- **Extensible Architecture**: clean core/adapter split for future memory systems
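
For example, the deduplication step can be as simple as keying content by a SHA-256 hash. A minimal sketch (the in-memory `seen` index and `dedupe` helper are illustrative; real state would live in the memory system):

```typescript
import { createHash } from "node:crypto";

// Identical content always maps to the same SHA-256 key.
function contentHash(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex");
}

// Illustrative in-memory index: hash -> id of the first stored copy.
const seen = new Map<string, string>();

function dedupe(content: string, candidateId: string): { id: string; duplicate: boolean } {
  const hash = contentHash(content);
  const existing = seen.get(hash);
  if (existing !== undefined) return { id: existing, duplicate: true };
  seen.set(hash, candidateId);
  return { id: candidateId, duplicate: false };
}
```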
## Architecture

```
Application
    ↓
CompressMemory.ai (compress → store → decompress)
    ↓
Supermemory (generates embeddings on original content)
    ↓
Vector Database
```
### Write Path

1. Classify content type
2. Compute content hash
3. Check for duplicates (deduplication)
4. Compress with zstd
5. Send original content to Supermemory (for embeddings)
6. Store compressed payload in metadata
7. Track storage metrics
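
For illustration, steps 2, 4, and 7 might look roughly like the sketch below. It assumes a zstd binding such as `@mongodb-js/zstd`; the `CompressedPayload` shape and `buildPayload` helper are hypothetical, not the project's actual internals:

```typescript
import { createHash } from "node:crypto";
import { compress } from "@mongodb-js/zstd";

// Hypothetical shape of the compressed payload stored in Supermemory metadata.
interface CompressedPayload {
  contentHash: string;    // SHA-256 of the original content (step 2)
  compressed: string;     // zstd-compressed content, base64-encoded (step 4)
  originalBytes: number;  // for storage metrics (step 7)
  compressedBytes: number;
}

// Steps 2 and 4: hash, then compress. The original content is still sent to
// Supermemory unchanged (step 5), so embeddings are unaffected.
async function buildPayload(content: string, zstdLevel = 3): Promise<CompressedPayload> {
  const original = Buffer.from(content, "utf8");
  const compressed = await compress(original, zstdLevel);
  return {
    contentHash: createHash("sha256").update(original).digest("hex"),
    compressed: compressed.toString("base64"),
    originalBytes: original.length,
    compressedBytes: compressed.length,
  };
}
```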
### Read Path

1. Search Supermemory (uses embeddings on original content)
2. Receive results with compressed data
3. Lazy decompress only returned results
4. Return original content to application
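
The lazy-decompression step is symmetric. A sketch, assuming the same zstd binding and the hypothetical payload shape from the write-path sketch:

```typescript
import { decompress } from "@mongodb-js/zstd";

// Called only for items a query actually returns, so memories that never
// match a search are never decompressed.
async function restoreContent(payload: { compressed: string }): Promise<string> {
  const original = await decompress(Buffer.from(payload.compressed, "base64"));
  return original.toString("utf8");
}
```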
## Installation

```bash
npm install compressmemory
```
## Quick Start

```typescript
import { CompressMemory } from "compressmemory";

const memory = new CompressMemory({
  apiKey: process.env.SUPERMEMORY_API_KEY,
  containerTag: "user-123",
  policy: "balanced",
});

// Write memory
await memory.write({
  content: "Your memory content here...",
  metadata: { source: "chat" },
});

// Read memory
const results = await memory.read({
  query: "search query",
  topK: 5,
});

// Get storage stats
const stats = await memory.getStorageStats();
console.log(`Saved ${stats.savedBytes} bytes (${stats.savingsPercent}%)`);
```
## API

#### Constructor Options
```typescript
interface CompressMemoryOptions {
  apiKey: string;                  // Supermemory API key
  containerTag: string;            // User/org identifier
  policy?: CompressionPolicyLevel; // 'minimal' | 'balanced' | 'aggressive'
}
```
#### Methods
##### `write(options)`

```typescript
await memory.write({
  content: string,
  metadata?: Record<string, unknown>
}) => { id: string }
```
##### `read(options)`

```typescript
await memory.read({
  query: string,
  topK?: number
}) => Array<{ id: string; content: string; metadata?: Record<string, unknown> }>
```
##### `getStorageStats()`

```typescript
await memory.getStorageStats() => {
  savedBytes: number
  originalSize: number
  compressedSize: number
  savingsPercent: number
}
```
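
The stats are plain byte accounting. A sketch of how the fields above could be derived (the `computeStats` helper is illustrative, not the actual implementation):

```typescript
interface StorageStats {
  savedBytes: number;
  originalSize: number;
  compressedSize: number;
  savingsPercent: number;
}

function computeStats(originalSize: number, compressedSize: number): StorageStats {
  const savedBytes = originalSize - compressedSize;
  return {
    savedBytes,
    originalSize,
    compressedSize,
    // Guard against division by zero for empty containers.
    savingsPercent: originalSize === 0 ? 0 : Math.round((savedBytes / originalSize) * 100),
  };
}
```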
## Compression Policies

- **minimal**: text normalization only (no compression, no dedup)
- **balanced** (recommended): zstd level 3 + deduplication (~45% savings)
- **aggressive**: zstd level 9 + deduplication (~60% savings, slower)
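
Internally, a policy level boils down to choosing a zstd level and toggling deduplication. A sketch of that mapping (the `PolicySettings` shape is illustrative):

```typescript
type CompressionPolicyLevel = "minimal" | "balanced" | "aggressive";

// Illustrative settings each policy level resolves to.
interface PolicySettings {
  compress: boolean;
  zstdLevel: number;
  dedup: boolean;
}

const POLICIES: Record<CompressionPolicyLevel, PolicySettings> = {
  minimal: { compress: false, zstdLevel: 0, dedup: false }, // normalization only
  balanced: { compress: true, zstdLevel: 3, dedup: true },
  aggressive: { compress: true, zstdLevel: 9, dedup: true },
};
```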
## Benchmarks

Run the benchmarks to see performance with your data:

```bash
npm run build
npm run benchmark
```
Example output:
| Strategy | Storage Reduction | Latency Overhead |
| ------------ | ----------------- | ---------------- |
| Raw | 0% | 0ms |
| zstd | ~45% | +2ms |
| zstd + dedup | ~60% | +3ms |
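
For a rough single-sample feel on your own text, something like this works (again assuming a zstd binding such as `@mongodb-js/zstd`; this helper is not part of the package):

```typescript
import { compress } from "@mongodb-js/zstd";

// One-off measurement of size reduction and compression latency for a string.
async function measure(sample: string, level = 3): Promise<void> {
  const original = Buffer.from(sample, "utf8");
  const start = performance.now();
  const compressed = await compress(original, level);
  const ms = performance.now() - start;
  const reduction = (1 - compressed.length / original.length) * 100;
  console.log(`${reduction.toFixed(1)}% smaller, ${ms.toFixed(2)}ms`);
}
```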
## Project Structure

```
compressmemory/
├── core/                    # Compression logic (zero Supermemory deps)
│   ├── classifier.ts        # Identify memory type
│   ├── encode.ts            # Apply compression strategy
│   ├── decode.ts            # Reverse compression
│   ├── policy.ts            # Choose strategy
│   ├── hash.ts              # SHA-256 hashing
│   ├── metrics.ts           # Storage savings tracking
│   └── strategies/
│       ├── text.ts          # zstd compression
│       └── dedup.ts         # Hash-based deduplication
├── adapters/                # Supermemory integration
│   └── supermemory/
│       ├── client.ts        # Adapter client
│       ├── write.ts         # Write with compression
│       └── read.ts          # Read with decompression
├── benchmarks/              # Performance tests
├── examples/                # Usage examples
└── tests/                   # Unit & integration tests
```
## Extensibility

CompressMemory is designed to support memory systems beyond Supermemory. The core compression logic has zero dependencies on Supermemory, which makes it straightforward to add adapters for:
- LangChain
- LlamaIndex
- MemGPT-style systems
- Custom memory implementations
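
A new backend only needs to move bytes; the core pipeline handles compression. A hypothetical adapter contract, not the project's actual interface, might look like:

```typescript
// Compressed bytes plus hash, as in the write-path sketch above.
type CompressedPayload = {
  contentHash: string;
  compressed: string;
  originalBytes: number;
  compressedBytes: number;
};

// Hypothetical contract a new memory-system adapter would implement.
interface MemoryAdapter {
  store(entry: {
    content: string;            // original content, used for embeddings
    payload: CompressedPayload; // compressed form, stored alongside
    metadata?: Record<string, unknown>;
  }): Promise<{ id: string }>;

  search(query: string, topK: number): Promise<Array<{
    id: string;
    payload: CompressedPayload;
    metadata?: Record<string, unknown>;
  }>>;
}
```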
## Contributing

We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

## License

MIT