Multi-layer caching with LRU, Redis, and async data fetching support
```bash
npm install @md-oss/cache
```
- Multi-Store Support - Use LRU cache, Redis, or combine multiple stores
- Async Cache Manager - Automatic data fetching with cache-aside pattern
- Promise Caching - Cache in-flight promises to prevent duplicate requests
- Metadata Tracking - Built-in statistics for hits, misses, and performance
- TTL Support - Configurable time-to-live for all cache entries
- Event System - Listen to cache operations (set, delete, clear, refresh)
- Type-Safe - Full TypeScript support with generics
```bash
pnpm add @md-oss/cache
```

```typescript
import { CacheManager, LRUCache } from '@md-oss/cache';

// Create an in-memory LRU cache
const cache = CacheManager.fromStore(
  new LRUCache({
    max: 1000, // maximum of 1,000 items
    ttl: 60000 // 60-second TTL
  })
);

// Set a value
await cache.set('user:123', 'John Doe', 30000); // 30-second TTL

// Get a value
const user = await cache.get('user:123'); // 'John Doe'

// Delete a value
await cache.del('user:123');

// Clear all entries
await cache.clear();
```
Automatically fetch and cache data when not found:
```typescript
import { AsyncCacheManager } from '@md-oss/cache';
import Keyv from 'keyv';
const userCache = new AsyncCacheManager({
  stores: [new Keyv()],
  ttl: 60000, // 60 seconds
  dataFunction: async (userId: string) => {
    // This function runs only on a cache miss
    const user = await db.users.findOne({ id: userId });
    return user;
  }
});
// First call - fetches from database and caches
const user1 = await userCache.get('user:123');
// Second call - returns from cache
const user2 = await userCache.get('user:123');
console.log(userCache.metadata);
// { hits: 1, misses: 1, added: 1, deleted: 0, ... }
```
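The cache-aside flow above can be sketched without the library: check a store, run the data function only on a miss, then populate the store and count hits and misses. A minimal, self-contained illustration (the `AsyncCacheSketch` class and its counters are ours, not the package's implementation):

```typescript
// Cache-aside sketch: a Map-backed store plus a data function that
// only runs on a miss. Counts hits/misses like `metadata` does.
type DataFn<T> = (key: string) => Promise<T>;

class AsyncCacheSketch<T> {
  private store = new Map<string, T>();
  metadata = { hits: 0, misses: 0, added: 0 };

  constructor(private dataFunction: DataFn<T>) {}

  async get(key: string): Promise<T> {
    const cached = this.store.get(key);
    if (cached !== undefined) {
      this.metadata.hits++;
      return cached; // hit: no fetch
    }
    this.metadata.misses++;
    const value = await this.dataFunction(key); // miss: fetch...
    this.store.set(key, value); // ...and populate the cache
    this.metadata.added++;
    return value;
  }
}

// Usage: the backing "database" call runs once for repeated gets.
let dbCalls = 0;
const users = new AsyncCacheSketch(async (id: string) => {
  dbCalls++;
  return { id, name: 'John Doe' };
});

const first = await users.get('user:123');  // miss: fetched and cached
const second = await users.get('user:123'); // hit: served from memory
```

The real `AsyncCacheManager` adds TTLs, multiple stores, and callbacks on top of this same miss-then-populate loop.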
Prevent duplicate in-flight requests:
```typescript
import { PromiseCache } from '@md-oss/cache';
const apiCache = new PromiseCache();

async function getUser(id: string) {
  return apiCache.get(async () => {
    // This function won't run if a request is already in flight
    return await fetch(`/api/users/${id}`).then(r => r.json());
  });
}
// These 3 calls will only make 1 API request
const [user1, user2, user3] = await Promise.all([
  getUser('123'),
  getUser('123'),
  getUser('123')
]);
```
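The mechanism can be shown standalone: keep one in-flight promise, hand that same promise to every caller, and drop it once it settles so later calls fetch fresh data. A sketch under those assumptions (`PromiseCacheSketch` is illustrative, not the exported `PromiseCache`):

```typescript
// In-flight promise caching: concurrent callers share one promise;
// it is cleared after settling so subsequent calls refetch.
class PromiseCacheSketch<T> {
  private inflight: Promise<T> | null = null;

  get(generator: () => Promise<T>): Promise<T> {
    if (this.inflight === null) {
      this.inflight = generator().finally(() => {
        this.inflight = null; // allow a new request after settling
      });
    }
    return this.inflight;
  }
}

// Usage: three concurrent calls trigger a single underlying request.
let requests = 0;
const dedup = new PromiseCacheSketch<string>();
const load = () =>
  dedup.get(async () => {
    requests++;
    return 'John Doe';
  });

const [a, b, c] = await Promise.all([load(), load(), load()]);
```

This is exactly the thundering-herd protection described above: duplicate work is avoided only while the request is in flight, so it composes well with a TTL cache in front of it.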
Set the Redis URL in your environment:
```env
REDIS_URL=redis://localhost:6379
```

```typescript
import { CacheManager, initializeRedis, getRedisClient } from '@md-oss/cache';
import Keyv from 'keyv';
import KeyvRedis from '@keyv/redis';
// Initialize Redis connection
await initializeRedis();
// Get Redis client
const redisClient = getRedisClient();
// Use Redis as cache store
const cache = CacheManager.fromStore(
  new Keyv({
    store: new KeyvRedis(redisClient)
  })
);
await cache.set('user:123', userData);
```
Layer multiple caches for optimal performance:
```typescript
import { CacheManager, LRUCache } from '@md-oss/cache';
import Keyv from 'keyv';
import KeyvRedis from '@keyv/redis';
// Create a multi-layer cache: LRU (L1) → Redis (L2)
const cache = new CacheManager({
  stores: [
    // L1: fast in-memory cache
    new Keyv({
      store: new LRUCache({ max: 500, ttl: 60000 })
    }),
    // L2: shared Redis cache
    new Keyv({
      store: new KeyvRedis(redisClient)
    })
  ],
  ttl: 300000 // 5-minute default TTL
});
// Get checks L1 first, then L2, populates higher levels on hit
const user = await cache.get('user:123');
```
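The lookup order can be illustrated with plain `Map`s standing in for the stores (`tieredGet`, `Layer`, and the backfill loop are our sketch, not the library internals):

```typescript
// Multi-layer lookup: check stores in order (L1 first); on a hit,
// backfill every store above the hit so the next read stops earlier.
type Layer = Map<string, string>;

function tieredGet(layers: Layer[], key: string): string | undefined {
  for (let i = 0; i < layers.length; i++) {
    const value = layers[i].get(key);
    if (value !== undefined) {
      for (let j = 0; j < i; j++) layers[j].set(key, value); // backfill
      return value;
    }
  }
  return undefined; // miss in every layer
}

// Usage: the value starts only in L2 (the Redis stand-in); after one
// lookup it is also in L1, so the next read never touches L2.
const l1: Layer = new Map();
const l2: Layer = new Map([['user:123', 'John Doe']]);
const hit = tieredGet([l1, l2], 'user:123');
```

The backfill step is what makes layering pay off: the slow shared store absorbs cold reads once, and the fast local store serves everything after that.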
Track cache performance and errors:
```typescript
import { AsyncCacheManager } from '@md-oss/cache';
import Keyv from 'keyv';

const cache = new AsyncCacheManager({
  stores: [new Keyv()],
  ttl: 60000,
  dataFunction: async (key: string) => {
    const data = await fetchExpensiveData(key);
    return data;
  },
  callbacks: {
    onStart: (key) => {
      console.log(`Fetching data for ${key}`);
    },
    onEnd: (key, duration) => {
      console.log(`Fetch took ${duration}ms for ${key}`);
    },
    onSuccess: (key, value) => {
      console.log(`Successfully cached ${key}`);
    },
    onError: (key, error) => {
      console.error(`Error fetching ${key}:`, error);
    }
  }
});
// Check async metadata
console.log(cache.metadata.async);
// { last: 123, total: 456, average: 152, longest: 234, shortest: 89 }
```
Listen to cache operations:
```typescript
import { CacheManager } from '@md-oss/cache';
import Keyv from 'keyv';

const cache = CacheManager.fromStore(new Keyv());

cache.on('set', ({ key, value, error }) => {
  if (!error) {
    console.log(`Cached: ${key} = ${value}`);
  }
});

cache.on('del', ({ key, error }) => {
  if (!error) {
    console.log(`Deleted: ${key}`);
  }
});

cache.on('clear', () => {
  console.log('Cache cleared');
});

cache.on('refresh', ({ key, error }) => {
  if (!error) {
    console.log(`Refreshed: ${key}`);
  }
});
```

Batch operations on multiple keys:

```typescript
// Get multiple keys
const users = await cache.mget(['user:1', 'user:2', 'user:3']);
// Set multiple keys
await cache.mset([
  { key: 'user:1', value: user1, ttl: 60000 },
  { key: 'user:2', value: user2, ttl: 60000 },
  { key: 'user:3', value: user3, ttl: 60000 }
]);
// Delete multiple keys
await cache.mdel(['user:1', 'user:2', 'user:3']);
```
Wrap expensive operations with automatic caching:
```typescript
const result = await cache.wrap(
  'expensive-computation',
  async () => {
    // Only runs on a cache miss
    return await performExpensiveComputation();
  },
  60000 // TTL in milliseconds
);
```
Cache upstream API responses:

```typescript
import { AsyncCacheManager } from '@md-oss/cache';
import Keyv from 'keyv';

const apiCache = new AsyncCacheManager({
  stores: [new Keyv()],
  ttl: 300000, // 5 minutes
  dataFunction: async (endpoint: string) => {
    const response = await fetch(endpoint);
    return response.json();
  }
});

app.get('/api/proxy/:path', async (req, res) => {
  const data = await apiCache.get(req.params.path);
  res.json(data);
});
```
Cache database lookups behind a multi-layer store:

```typescript
const userCache = new AsyncCacheManager({
  stores: [
    new Keyv({ store: new LRUCache({ max: 1000, ttl: 60000 }) }),
    new Keyv({ store: new KeyvRedis(redisClient) })
  ],
  ttl: 600000, // 10 minutes
  dataFunction: async (userId: string) => {
    return await db.users.findUnique({ where: { id: userId } });
  }
});

async function getUser(id: string) {
  return userCache.get(id);
}
```
Store sessions in Redis:

```typescript
const sessionCache = CacheManager.fromStore(
  new Keyv({ store: new KeyvRedis(redisClient) })
);

async function getSession(sessionId: string) {
  return sessionCache.get(`session:${sessionId}`);
}

async function createSession(sessionId: string, data: Session) {
  await sessionCache.set(`session:${sessionId}`, data, 3600000); // 1 hour
}
```

Simple in-memory rate limiting:

```typescript
const rateLimitCache = CacheManager.fromStore(
  new LRUCache({ max: 10000, ttl: 60000 })
);

async function checkRateLimit(userId: string): Promise<boolean> {
  const key = `rate:${userId}`;
  const count = (await rateLimitCache.get(key)) ?? 0;
  if (count >= 100) {
    return false; // rate limit exceeded
  }
  await rateLimitCache.set(key, count + 1);
  return true;
}
```

Cache expensive computations:

```typescript
const computeCache = new AsyncCacheManager({
  stores: [new Keyv()],
  ttl: 3600000, // 1 hour
  dataFunction: async (key: string) => {
    const [start, end] = key.split(':');
    return await calculateComplexMetrics(Number(start), Number(end));
  }
});

async function getMetrics(startDate: Date, endDate: Date) {
  const key = `${startDate.getTime()}:${endDate.getTime()}`;
  return computeCache.get(key);
}
```
Track cache performance:
```typescript
const cache = new AsyncCacheManager({ /* ... */ });
// Basic metadata
console.log(cache.metadata);
// {
//   hits: 150,
//   misses: 50,
//   added: 50,
//   deleted: 10,
//   updated: 20,
//   cleared: 0,
//   errors: 0,
//   async: {
//     last: 123,
//     total: 6150,
//     average: 123,
//     longest: 456,
//     shortest: 45
//   }
// }
// Calculate hit rate
const hitRate = cache.metadata.hits / (cache.metadata.hits + cache.metadata.misses);
console.log(`Cache hit rate: ${(hitRate * 100).toFixed(2)}%`);
```
- LRU Cache: O(1) operations, best for hot data
- Redis: Network overhead, best for shared/persistent cache
- Multi-Store: Checks stores in order, populates higher levels on hit
- Promise Cache: Prevents thundering herd for in-flight requests
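The O(1) LRU claim can be demonstrated with a plain `Map`, whose insertion order doubles as a recency list (this `LRUSketch` is an illustration, not the exported `LRUCache`):

```typescript
// Minimal LRU sketch: Map iteration order tracks recency.
// get/set are O(1); the least-recently-used key is the first
// key in iteration order and is evicted when capacity is exceeded.
class LRUSketch<K, V> {
  private map = new Map<K, V>();
  constructor(private max: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key);
    this.map.set(key, value); // move to most-recent position
    return value;
  }

  set(key: K, value: V): void {
    this.map.delete(key); // refresh position if the key exists
    this.map.set(key, value);
    if (this.map.size > this.max) {
      const oldest = this.map.keys().next().value as K; // LRU entry
      this.map.delete(oldest);
    }
  }
}

// Usage: with max = 2, touching 'a' keeps it alive; 'b' is evicted.
const lru = new LRUSketch<string, number>(2);
lru.set('a', 1);
lru.set('b', 2);
lru.get('a');    // 'a' is now most recent
lru.set('c', 3); // capacity exceeded: evicts 'b'
```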
Putting it together:

```typescript
// Use appropriate TTLs
const shortTTL = 60000; // 1 minute for volatile data
const mediumTTL = 600000; // 10 minutes for regular data
const longTTL = 3600000; // 1 hour for stable data
// Layer caches appropriately
const cache = new CacheManager({
  stores: [
    new Keyv({ store: new LRUCache({ max: 100, ttl: shortTTL }) }), // hot cache
    new Keyv({ store: new KeyvRedis(redis), ttl: longTTL }) // warm cache
  ]
});
// Use promise caching for expensive operations
const promiseCache = new PromiseCache(5000);
const result = await promiseCache.get(() => expensiveOperation());
```
CacheManager methods:

- `get(key)` - Get a value from the cache
- `mget(keys)` - Get multiple values
- `set(key, value, ttl?)` - Set a value
- `mset(entries)` - Set multiple values
- `del(key)` - Delete a value
- `mdel(keys)` - Delete multiple values
- `clear()` - Clear all cache entries
- `wrap(key, fn, ttl?)` - Wrap a function with caching
- `ttl(key)` - Get the remaining TTL for a key
- `keys()` - Get all cached keys
- `on(event, listener)` - Listen to cache events
- `extend(options)` - Create an extended cache manager
- `disconnect()` - Disconnect all stores
AsyncCacheManager extends CacheManager with:

- `dataFunction` - Automatic data fetching on cache miss
- `callbacks` - Lifecycle callbacks (`onStart`, `onEnd`, `onSuccess`, `onError`)
- Additional metadata tracking for async operations
PromiseCache methods:

- `get(generator)` - Get or generate a cached promise
- `clear()` - Clear the cached promise
Redis utilities:

- `initializeRedis()` - Initialize the Redis connection
- `getRedisClient()` - Get the Redis client instance
- `redisSetKv(key, value, ttl)` - Set a key-value pair with a TTL
Exported types:

```typescript
import type {
AbstractCache,
CacheManagerMetadata,
AsyncCacheManagerMetadata,
WithCacheDetails,
SetCacheArguments,
LRUArgs
} from '@md-oss/cache';
```

```env
REDIS_URL=redis://localhost:6379 # Required for Redis support
```