Back to home
Performance Analysis

TerseJSON Performance Benchmarks

Memory efficiency through lazy Proxy expansion — plus network savings as a bonus

Benchmark report • January 2026

Executive Summary

TerseJSON's Proxy delivers 70% memory savings through lazy expansion — only accessed fields materialize in memory. Combined with 86% fewer allocations, this translates to faster GC, lower memory pressure, and better performance on memory-constrained devices. As a bonus, you also get 30-80% smaller network payloads.

70%
Memory Saved
86%
Fewer Allocations
<5%
CPU Overhead
30-80%
Smaller Payloads
0
Code Changes

1Memory Efficiency: The Primary Advantage

Why Memory Matters More Than Wire Size

Network compression (gzip/Brotli) only helps data in transit. Once the payload arrives, it's fully decompressed and parsed into memory. TerseJSON's Proxy solves a different problem: it prevents unused fields from ever allocating in the first place.

Key insight: Binary formats like Protobuf and MessagePack require full deserialization. Every field allocates memory whether you access it or not. TerseJSON's lazy Proxy only expands keys on-demand.

Memory Benchmarks: 1,000 Records x 21 Fields

Fields AccessedNormal JSONTerseJSON ProxyMemory Saved
1 field6.35 MB4.40 MB31%
3 fields (list view)3.07 MB~0 MB~100%
6 fields (card view)3.07 MB~0 MB~100%
All 21 fields4.53 MB1.36 MB70%

Why ~0 MB / ~100% savings? The Proxy is so lightweight that accessing partial fields triggers garbage collection of unused data. The original compressed payload stays in memory, keys are translated on-demand without creating intermediate objects.

Memory Savings by Fields Accessed

2The Proxy Does Less Work, Not More

A common misconception: "Adding a Proxy layer must add overhead." In reality, the Proxy does FEWER operations than standard JSON.parse because it parses a smaller string and defers expansion.

Standard JSON.parse

1

Parse 890KB string

2

Allocate 1000 objects x 21 fields = 21,000 properties

3

Access 3 fields per object

4

GC collects 18,000 unused properties

TerseJSON Proxy

1

Parse 180KB string (smaller = faster)

2

Wrap in Proxy (O(1), ~0.1ms)

3

Access 3 fields = 3,000 properties created

4

18,000 properties NEVER EXIST

CPU Benchmark Results

MetricStandard JSONTerseJSON ProxyImprovement
String to parse890 KB180 KB80% smaller
Property allocations21,0003,00086% fewer
GC pressure18,000 unused0 unused100% reduction
CPU overheadbaseline<5%Near-zero
CPU and Allocation Comparison

3TerseJSON vs Binary Formats

Protobuf and MessagePack are often suggested as alternatives. But they have a fundamental limitation: full deserialization is required. Every field must be parsed and allocated, even if you only need 3 fields from a 21-field object.

FeatureTerseJSONProtobufMessagePack
Partial field accessOnly accessed fields allocateFull deserializationFull deserialization
Memory (3/21 fields)~0 MB~4 MB~4 MB
Wire compression30-80%80-90%70-80%
Schema requiredNoYes (.proto files)No
Human-readableYes (JSON)No (binary)No (binary)
Migration effort5 minutesDays/weeksHours

The bottom line: Protobuf wins on wire compression, but requires full deserialization. TerseJSON wins on memory efficiency — only the fields you access get allocated. For most real-world apps, that's the bigger win.

TerseJSON vs Binary Formats

4Real-World Use Cases

TerseJSON shines when you fetch objects with many fields but only render a subset. This is extremely common:

CMS List Views

Fetch 1000 articles with 21 fields, render title + slug + excerpt (3 fields)

~100%memory saved

E-commerce Product Lists

Fetch products with 30+ fields, show name + price + image (3 fields)

90%memory saved

Dashboard Aggregates

Fetch user records for charts, aggregate 2-3 fields from 15+ field objects

80%memory saved

Mobile Infinite Scroll

Memory-constrained devices loading paginated data continuously

70%memory saved
// CMS fetches 1000 articles with 21 fields each
const articles = await terseFetch('/api/articles');

// But list view only needs 3 fields
articles.map(a => ({
  title: a.title,
  slug: a.slug,
  excerpt: a.excerpt
}));

// Result: Only 3 keys translated per object
// The other 18 fields stay compressed in memory
// Memory saved: ~100%
Real-World Use Cases

5Network Bonus: Smaller Payloads Too

Memory efficiency is the primary benefit, but you also get significant network savings. TerseJSON compresses repetitive JSON keys, reducing payload size by 30-80%.

Bandwidth Savings by Endpoint Type

Endpoint TypeCompression RateWhy
Products API38.5-38.6%Many repeated keys (name, price, category, description, etc.)
Users API18-33%Nested objects (address, metadata) with repeated subkeys
Logs API26.3-26.4%Consistent structure but shorter key names

Stacking with Gzip/Brotli

MethodReductionBest For
TerseJSON alone~31%Quick wins, no server config needed
TerseJSON + Gzip~85%Production with nginx/CDN
TerseJSON + Brotli~93%Maximum compression
Network Savings

6Mobile Performance Impact

Mobile devices are memory-constrained and often on slower networks. TerseJSON helps on both fronts: less memory allocation AND smaller payloads.

NetworkNormal JSONTerseJSONTime Saved
4G (20 Mbps)200ms30ms170ms (85%)
3G (2 Mbps)2,000ms300ms1,700ms (85%)
Slow 3G (400 Kbps)10,000ms1,500ms8,500ms (85%)

User perception: 10 seconds = "Is this broken?" → User leaves. 1.5 seconds = "That was quick!" → User stays.Every 100ms of latency costs 1% in conversions (Amazon/Google studies)

Mobile Performance

7Enterprise Cost Savings

At enterprise scale, memory + bandwidth savings compound. Less memory per request = more requests per server. Smaller payloads = lower egress costs.

ScaleDaily TrafficMonthly Bandwidth SavedCost Reduction*
Startup1M requests9.3 GB$0.84
Growth10M requests93 GB$8.37
Scale100M requests930 GB$83.70
Enterprise1B requests9.3 TB$837.00

*At $0.09/GB (AWS CloudFront pricing). Memory savings translate to reduced server instances.

Enterprise Cost Savings

8Integration: 5 Minutes to Memory Savings

Server Setup (2 lines)

import { terse } from 'tersejson/express';
app.use(terse());

Client Setup (1 line change)

// Before
const data = await fetch('/api/users').then(r => r.json());

// After
import { createFetch } from 'tersejson/client';
const terseFetch = createFetch();
const data = await terseFetch('/api/users').then(r => r.json());
// data works exactly the same — Proxy handles expansion transparently
Integration

9Run the Benchmarks Yourself

Don't take our word for it. Run the benchmarks on your own machine:

# Clone the repo
git clone https://github.com/timclausendev-web/tersejson
cd tersejson/demo

# Memory benchmark (requires --expose-gc)
node --expose-gc memory-analysis.js

# CPU benchmark
node cpu-benchmark.js
Benchmark Scripts

10Summary: Memory-First JSON Processing

"TerseJSON: 70% less memory, 86% fewer allocations"

Lazy Proxy expansion
Only accessed fields allocate
Near-zero CPU overhead
30-80% smaller payloads (bonus)

Primary: Memory Efficiency

70% memory savings with lazy expansion
86% fewer allocations = less GC pressure
Unused fields never exist unlike Protobuf

Secondary: Network Savings

30-80% smaller payloads on the wire
Stacks with gzip/Brotli for 93% total
Faster mobile loads on 3G/4G
TerseJSON Summary

Ready for memory-efficient JSON?

Get started with TerseJSON in under 5 minutes.