# TerseJSON

Memory-efficient JSON processing. Lazy Proxy expansion uses 70% less RAM than JSON.parse - plus 30-80% smaller payloads.

`npm install tersejson`
> TerseJSON does LESS work than JSON.parse, not more. The Proxy skips full deserialization - only accessed fields allocate memory. Plus 30-80% smaller payloads.


Your CMS API returns 21 fields per article. Your list view renders 3.
```javascript
// Standard JSON.parse workflow:
const articles = await fetch('/api/articles').then(r => r.json());
// Result: 1000 objects x 21 fields = 21,000 properties allocated in memory
// You use: title, slug, excerpt (3 fields)
// Wasted: 18,000 properties that need garbage collection
```

Full deserialization wastes memory. Every field gets allocated whether you access it or not. Binary formats (Protobuf, MessagePack) have the same problem - they require complete deserialization.
TerseJSON's Proxy wraps compressed data and translates keys on-demand:
```javascript
// TerseJSON workflow:
const articles = await terseFetch('/api/articles');
// Result: Compressed payload + Proxy wrapper
// Access: article.title → translates key, returns value
// Never accessed: 18 other fields stay compressed, never allocate
```

Memory Benchmarks (1000 records, 21 fields each):
| Fields Accessed | Normal JSON | TerseJSON Proxy | Memory Saved |
|-----------------|-------------|-----------------|--------------|
| 1 field | 6.35 MB | 4.40 MB | 31% |
| 3 fields (list view) | 3.07 MB | ~0 MB | ~100% |
| 6 fields (card view) | 3.07 MB | ~0 MB | ~100% |
| All 21 fields | 4.53 MB | 1.36 MB | 70% |
Run the benchmark yourself: `node --expose-gc demo/memory-analysis.js`
"Doesn't the Proxy add overhead?" is the most common misconception. Let's trace the actual operations:
Standard JSON.parse workflow:
1. Parse 890KB string → allocate 1000 objects x 21 fields = 21,000 properties
2. Access 3 fields per object
3. GC eventually collects 18,000 unused properties
TerseJSON workflow:
1. Parse 180KB string (smaller = faster) → allocate 1000 objects x 21 SHORT keys
2. Wrap in Proxy (O(1), ~0.1ms, no allocation)
3. Access 3 fields → 3,000 properties CREATED
4. 18,000 properties NEVER EXIST
The math:
- Parse time: Smaller string (180KB vs 890KB) = faster
- Allocations: 3,000 vs 21,000 = 86% fewer
- GC pressure: Only 3,000 objects to collect vs 21,000
- Proxy lookup: O(1) Map access, ~0.001ms per field
Result: LESS total work, not more. The Proxy doesn't add overhead - it skips work.
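
To sanity-check this outside the bundled demo, here is a minimal sketch of the same kind of heap measurement. It uses the `compress` and `wrapWithProxy` exports from the core API documented below; the record shape, field names, and the cast are illustrative, not part of the library:

```typescript
import { compress, wrapWithProxy } from 'tersejson';

// 1000 fake records x 21 verbose fields (illustrative data, not the bundled demo script).
const records: Record<string, string>[] = Array.from({ length: 1000 }, (_, i) => {
  const rec: Record<string, string> = {};
  for (let f = 0; f < 21; f++) rec[`someVerboseFieldName${f}`] = `value-${i}-${f}`;
  return rec;
});

function heapUsedMB(): number {
  globalThis.gc?.(); // run with: node --expose-gc
  return process.memoryUsage().heapUsed / 1024 / 1024;
}

const compressed = compress(records);                        // short keys + key map
const before = heapUsedMB();

const proxied = wrapWithProxy(compressed) as typeof records; // cast only for the sketch
for (let i = 0; i < records.length; i++) {
  // Touch 3 of the 21 fields, like a list view would.
  void proxied[i].someVerboseFieldName0;
  void proxied[i].someVerboseFieldName1;
  void proxied[i].someVerboseFieldName2;
}

console.log(`Proxy path retained ~${(heapUsedMB() - before).toFixed(2)} MB`);
```
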
```bash
npm install tersejson
```

Server (Express):

```typescript
import express from 'express';
import { terse } from 'tersejson/express';
const app = express();
app.use(terse());
app.get('/api/users', (req, res) => {
// Just send data as normal - compression is automatic!
res.json(users);
});
```

Client:

```typescript
import { fetch } from 'tersejson/client';
// Use exactly like regular fetch
const users = await fetch('/api/users').then(r => r.json());
// Access properties normally - Proxy handles key translation
console.log(users[0].firstName); // Works transparently!
console.log(users[0].emailAddress); // Works transparently!
```

How it works:

```
┌─────────────────────────────────────────────────────────────┐
│ SERVER │
│ 1. Your Express route calls res.json(data) │
│ 2. TerseJSON middleware intercepts │
│ 3. Compresses keys: { "a": "firstName", "b": "lastName" } │
│ 4. Sends smaller payload (180KB vs 890KB) │
└─────────────────────────────────────────────────────────────┘
↓ Network (smaller, faster)
┌─────────────────────────────────────────────────────────────┐
│ CLIENT │
│ 5. JSON.parse smaller string (faster) │
│ 6. Wrap in Proxy (O(1), near-zero cost) │
│ 7. Access data.firstName → Proxy translates to data.a │
│ 8. Unused fields never materialize in memory │
└─────────────────────────────────────────────────────────────┘
```

Use cases:

- CMS list views - title + slug + excerpt from 20+ field objects
- Dashboards - large datasets, aggregate calculations on subsets
- Mobile apps - memory constrained, infinite scroll
- E-commerce - product grids (name + price + image from 30+ field objects)
- Long-running SPAs - memory accumulation over hours (support tools, dashboards)
Memory efficiency is the headline. Smaller payloads are the bonus:
| Compression Method | Reduction | Use Case |
|--------------------|-----------|----------|
| TerseJSON alone | 30-39% | Sites without Gzip (68% of web) |
| Gzip alone | ~75% | Large payloads (>32KB) |
| TerseJSON + Gzip | ~85% | Recommended for production |
| TerseJSON + Brotli | ~93% | Maximum compression |
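
Stacking is just two middlewares. A sketch of what that might look like with the standard `compression` package (the route and placeholder data are illustrative; exact ordering and options are up to you):

```typescript
import express from 'express';
import compression from 'compression';     // gzip/brotli at the HTTP layer
import { terse } from 'tersejson/express'; // key compression at the JSON layer

const app = express();

app.use(compression()); // compresses the outgoing bytes
app.use(terse());       // shortens keys before res.json serializes

app.get('/api/articles', (_req, res) => {
  const articles = [{ title: 'Hello', slug: 'hello', excerpt: '...' }]; // placeholder data
  res.json(articles);   // sent as TerseJSON, then gzipped on the wire
});

app.listen(3000);
```
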
Network speed impact (1000-record payload):
| Network | Normal JSON | TerseJSON + Gzip | Saved |
|---------|-------------|------------------|-------|
| 4G (20 Mbps) | 200ms | 30ms | 170ms |
| 3G (2 Mbps) | 2,000ms | 300ms | 1,700ms |
| Slow 3G | 10,000ms | 1,500ms | 8,500ms |
"Just use gzip" misses two points:
1. 68% of websites don't have Gzip enabled (W3Techs). Reverse-proxy defaults are hostile - nginx, Traefik, and Kubernetes ingress controllers all ship with compression off.
2. Gzip doesn't help memory. Even with perfect compression over the wire, JSON.parse still allocates every field. TerseJSON's Proxy keeps unused fields compressed in memory.
TerseJSON works at the application layer:
- No proxy config needed
- No DevOps tickets
- Stacks with gzip/brotli for maximum savings
- Plus memory benefits that gzip can't provide
| | TerseJSON | Protobuf/MessagePack |
|---|-----------|---------------------|
| Wire compression | 30-80% | 80-90% |
| Memory on partial access | Only accessed fields | Full deserialization required |
| Schema required | No | Yes |
| Human-readable | Yes (JSON in DevTools) | No (binary) |
| Migration effort | 2 minutes | Days/weeks |
| Debugging | Easy | Need special tools |
Binary formats win on wire size. TerseJSON wins on memory.
If you access 3 fields from a 21-field object:
- Protobuf: All 21 fields deserialized into memory
- TerseJSON: Only 3 fields materialize
NEW: Automatic memory-efficient queries with the MongoDB native driver.
```typescript
import { terseMongo } from 'tersejson/mongodb';
import { MongoClient } from 'mongodb';
// Call once at app startup
await terseMongo();
// All queries automatically return Proxy-wrapped results
const client = new MongoClient(uri);
const users = await client.db('mydb').collection('users').find().toArray();
// Access properties normally - 70% less memory
console.log(users[0].firstName); // Works transparently!
```

What gets patched:

- `find().toArray()` - arrays of documents
- `find().next()` - single-document iteration
- `for await (const doc of cursor)` - async iteration (see the sketch below)
- `findOne()` - single-document queries
- `aggregate().toArray()` - aggregation results
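
For the async-iteration path, a minimal sketch (connection string, database, and field names are placeholders):

```typescript
import { MongoClient } from 'mongodb';
import { terseMongo } from 'tersejson/mongodb';

await terseMongo(); // patch the driver once at startup

const client = new MongoClient('mongodb://localhost:27017');
const users = client.db('mydb').collection('users');

// Each document the cursor yields is Proxy-wrapped, so only the
// fields you actually read get materialized.
for await (const doc of users.find()) {
  console.log(doc.firstName, doc.lastName);
}

await client.close();
```
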
Options:
```typescript
await terseMongo({
minArrayLength: 5, // Only compress arrays with 5+ items
skipSingleDocs: true, // Don't wrap findOne results
minKeyLength: 4, // Only compress keys with 4+ chars
});
// Restore original behavior
import { unterse } from 'tersejson/mongodb';
await unterse();
```
Automatic memory-efficient queries with node-postgres (pg).
```typescript
import { tersePg } from 'tersejson/pg';
import { Client, Pool } from 'pg';
// Call once at app startup
await tersePg();
// All queries automatically return Proxy-wrapped results
const client = new Client();
await client.connect();
const { rows } = await client.query('SELECT * FROM users');
// Access properties normally - 70% less memory
console.log(rows[0].firstName); // Works transparently!
// Works with Pool too
const pool = new Pool();
const { rows: users } = await pool.query('SELECT * FROM users');
```
Options:
```typescript
await tersePg({
minArrayLength: 5, // Only compress arrays with 5+ items
skipSingleRows: true, // Don't wrap single-row results
minKeyLength: 4, // Only compress keys with 4+ chars
});
// Restore original behavior
import { untersePg } from 'tersejson/pg';
await untersePg();
```
Automatic memory-efficient queries with mysql2.
```typescript
import { terseMysql } from 'tersejson/mysql';
import mysql from 'mysql2/promise';
// Call once at app startup
await terseMysql();
// All queries automatically return Proxy-wrapped results
const connection = await mysql.createConnection({ host: 'localhost', user: 'root' });
const [rows] = await connection.query('SELECT * FROM users');
// Access properties normally - 70% less memory
console.log(rows[0].firstName); // Works transparently!
// Works with Pool too
const pool = mysql.createPool({ host: 'localhost', user: 'root' });
const [users] = await pool.query('SELECT * FROM users');
```
Options:
```typescript
await terseMysql({
minArrayLength: 5, // Only compress arrays with 5+ items
skipSingleRows: true, // Don't wrap single-row results
minKeyLength: 4, // Only compress keys with 4+ chars
});
// Restore original behavior
import { unterseMysql } from 'tersejson/mysql';
await unterseMysql();
```
Automatic memory-efficient queries with better-sqlite3.
```typescript
import { terseSqlite } from 'tersejson/sqlite';
import Database from 'better-sqlite3';
// Call once at app startup (synchronous)
terseSqlite();
// All queries automatically return Proxy-wrapped results
const db = new Database('my.db');
const rows = db.prepare('SELECT * FROM users').all();
// Access properties normally - 70% less memory
console.log(rows[0].firstName); // Works transparently!
// Single row queries too
const user = db.prepare('SELECT * FROM users WHERE id = ?').get(1);
console.log(user.email); // Works transparently!
```
Options:
```typescript
terseSqlite({
minArrayLength: 5, // Only compress arrays with 5+ items
skipSingleRows: true, // Don't wrap get() results
minKeyLength: 4, // Only compress keys with 4+ chars
});
// Restore original behavior
import { unterseSqlite } from 'tersejson/sqlite';
unterseSqlite();
```
Automatic memory-efficient queries with the Sequelize ORM.
```typescript
import { terseSequelize } from 'tersejson/sequelize';
import { Sequelize, Model, DataTypes } from 'sequelize';
// Call once at app startup
await terseSequelize();
// Define your models as normal
class User extends Model {}
User.init({ firstName: DataTypes.STRING }, { sequelize });
// All queries automatically return Proxy-wrapped results
const users = await User.findAll();
// Access properties normally - 70% less memory
console.log(users[0].firstName); // Works transparently!
// Works with all Sequelize query methods
const user = await User.findOne({ where: { id: 1 } });
const { rows, count } = await User.findAndCountAll();
```
Options:
```typescript
await terseSequelize({
minArrayLength: 5, // Only compress arrays with 5+ items
skipSingleRows: true, // Don't wrap findOne/findByPk results
usePlainObjects: true, // Convert Model instances to plain objects (default)
});
// Restore original behavior
import { unterseSequelize } from 'tersejson/sequelize';
await unterseSequelize();
```
TerseJSON includes utilities for memory-efficient server-side data handling:
```typescript
import { TerseCache, compressStream } from 'tersejson/server-memory';
// Memory-efficient caching - stores compressed, expands on access
const cache = new TerseCache();
cache.set('users', largeUserArray);
const users = cache.get('users'); // Returns Proxy-wrapped data
// Streaming compression for database cursors
const cursor = db.collection('users').find().stream();
for await (const batch of compressStream(cursor, { batchSize: 100 })) {
// Process compressed batches without loading entire result set
}
// Inter-service communication - pass compressed data without intermediate expansion
import { createTerseServiceClient } from 'tersejson/server-memory';
const serviceB = createTerseServiceClient({ baseUrl: 'http://service-b' });
const data = await serviceB.get('/api/users'); // Returns Proxy-wrapped
```

Express middleware options:

```typescript
import { terse } from 'tersejson/express';
app.use(terse({
minArrayLength: 5, // Only compress arrays with 5+ items
minKeyLength: 4, // Only compress keys with 4+ characters
maxDepth: 5, // Max nesting depth to traverse
debug: true, // Log compression stats
}));
```

Client API:

```typescript
import {
fetch, // Drop-in fetch replacement
createFetch, // Create configured fetch instance
expand, // Fully expand a terse payload
proxy, // Wrap payload with Proxy (default)
process, // Auto-detect and expand/proxy
} from 'tersejson/client';
// Drop-in fetch replacement
const data = await fetch('/api/users').then(r => r.json());
// Manual processing
import { process } from 'tersejson/client';
const response = await regularFetch('/api/users');
const data = process(await response.json());
```

Core API:

```typescript
import {
compress, // Compress an array of objects
expand, // Expand a terse payload (full deserialization)
wrapWithProxy, // Wrap payload with Proxy (lazy expansion - recommended)
isTersePayload, // Check if data is a terse payload
} from 'tersejson';
// Manual compression
const compressed = compress(users, { minKeyLength: 3 });
// Two expansion strategies:
const expanded = expand(compressed); // Full expansion - all fields allocated
const proxied = wrapWithProxy(compressed); // Lazy expansion - only accessed fields
```

Axios interceptors:

```typescript
import axios from 'axios';
import { createAxiosInterceptors } from 'tersejson/integrations';
const { request, response } = createAxiosInterceptors();
axios.interceptors.request.use(request);
axios.interceptors.response.use(response);
```

SWR:

```typescript
import useSWR from 'swr';
import { createSWRFetcher } from 'tersejson/integrations';
const fetcher = createSWRFetcher();
function UserList() {
  const { data } = useSWR('/api/users', fetcher);
  return <div>{data?.[0].firstName}</div>;
}
```

React Query:

```typescript
import { useQuery } from '@tanstack/react-query';
import { createQueryFn } from 'tersejson/integrations';

const queryFn = createQueryFn();
function UserList() {
  const { data } = useQuery({
    queryKey: ['users'],
    queryFn: () => queryFn('/api/users'),
  });
  return <div>{data?.[0].firstName}</div>;
}
```

GraphQL:

```typescript
// Server
import { terseGraphQL } from 'tersejson/graphql';
import { graphqlHTTP } from 'express-graphql';

app.use('/graphql', terseGraphQL(graphqlHTTP({ schema })));

// Client
import { createTerseLink } from 'tersejson/graphql-client';
import { ApolloClient, InMemoryCache, HttpLink, from } from '@apollo/client';

const httpLink = new HttpLink({ uri: '/graphql' });

const client = new ApolloClient({
  link: from([createTerseLink(), httpLink]),
  cache: new InMemoryCache(),
});
```

## TypeScript Support

Full type definitions included:
```typescript
import type { TersePayload, Tersed } from 'tersejson';

interface User {
firstName: string;
lastName: string;
}
const users: User[] = await fetch('/api/users').then(r => r.json());
users[0].firstName; // TypeScript knows this is a string
```

## FAQ

**Will the short keys show up in my code or serialized output?**
No! The Proxy is transparent.
JSON.stringify(data) outputs original key names.
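
A quick sketch to see this for yourself with the core API (the tiny array and names are illustrative; real payloads are where compression kicks in, and the cast is only for the sketch):

```typescript
import { compress, wrapWithProxy } from 'tersejson';

const users = [
  { firstName: 'Ada', lastName: 'Lovelace' },
  { firstName: 'Grace', lastName: 'Hopper' },
];

const proxied = wrapWithProxy(compress(users)) as typeof users;

console.log(proxied[0].firstName);    // "Ada" - original keys on property access
console.log(JSON.stringify(proxied)); // original keys in the serialized output too
```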

**What about nested objects and arrays?**

Fully supported. TerseJSON recursively compresses nested objects and arrays.

**Is the Proxy slower than JSON.parse()?**
Proxy mode adds <5% CPU overhead vs JSON.parse(). But with smaller payloads and fewer allocations, net total work is LESS. Memory is significantly lower.

**When should I use wrapWithProxy() vs expand()?**
- wrapWithProxy() (default): Best for most cases. Lazy expansion, lower memory.
- expand(): When you need a plain object (serialization to storage, passing to libraries that don't support Proxy).
## Browser Support

Works in all modern browsers that support `Proxy` (ES6).

## Contributing

Contributions welcome! Please read our contributing guidelines.

## License
MIT - see LICENSE
---
tersejson.com | Memory-efficient JSON for high-volume APIs