70% less memory — Proxy does LESS work than JSON.parse

Memory-Efficient
JSON Processing

TerseJSON's Proxy skips full deserialization. Access 3 fields from a 21-field object — only 3 fields materialize in memory. Plus 30-80% smaller payloads.

Memory: 70% less RAM · Network: 30-80% smaller · Overhead: Near-zero
npm install tersejson
70% Memory Saved
86% Fewer Allocations
< 1ms Overhead
0 Code Changes
Client-Side Optimization

Memory Efficiency with Lazy Proxy

Binary formats require full deserialization. TerseJSON's Proxy expands keys only on demand, saving memory on the fields you never access.

Memory Usage: 1,000 Records x 21 Fields

Measured with --expose-gc, forcing garbage collection between tests

Fields Accessed | Normal JSON | TerseJSON Proxy | Memory Saved
1 field | 6.35 MB | 4.40 MB | 31%
3 fields (list view) | 3.07 MB | ~0 MB | ~100%
6 fields (card view) | 3.07 MB | ~0 MB | ~100%
All 21 fields | 4.53 MB | 1.36 MB | 70%

Why ~0 MB / ~100% savings?

The Proxy adds almost no heap of its own. The original compressed payload stays in memory, and keys are translated on demand without creating intermediate objects, so partial access shows no measurable additional allocation.
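The mechanism can be sketched in a few lines. This is a simplified illustration, not TerseJSON's actual internals: a Proxy whose `get` trap maps original key names back to their short aliases, so no expanded object is ever allocated. The payload shape matches the wire format shown elsewhere on this page.

```javascript
// Simplified sketch of lazy key expansion (illustrative only, not
// TerseJSON's real implementation). `payload` uses the wire format
// shown on this page: key map `k` (alias -> original) plus rows `d`.
function wrapRow(row, longToShort) {
  return new Proxy(row, {
    get(target, prop) {
      // Translate the original key name to its alias on access;
      // no expanded copy of the row is ever allocated.
      const short = longToShort[prop];
      return short !== undefined ? target[short] : target[prop];
    },
  });
}

function expand(payload) {
  // Invert the key map once (a handful of entries, shared by all rows).
  const longToShort = Object.fromEntries(
    Object.entries(payload.k).map(([short, long]) => [long, short])
  );
  return payload.d.map((row) => wrapRow(row, longToShort));
}

const payload = {
  __terse__: true,
  v: 1,
  k: { a: 'firstName', b: 'lastName' },
  d: [{ a: 'John', b: 'Doe' }],
};
const users = expand(payload);
console.log(users[0].firstName); // reads target.a under the hood -> "John"
```

A production version would also implement `has` and `ownKeys` traps so `Object.keys`, spread, and the `in` operator behave transparently.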

Perfect For

CMS List Views

Title + slug + excerpt from 20+ field objects

Dashboards

Large datasets, aggregate views

Mobile Apps

Memory constrained, infinite scroll

E-commerce

Product listings from 30+ field objects

Protobuf requires full deserialization. TerseJSON doesn't.

Fetch everything, render what you need, pay only for what you access.

The Proxy Does Less Work, Not More

Critics assume TerseJSON adds overhead. Let's address the common misconceptions.

Myth

"Every memory operation takes processing time"

Reality

True — but TerseJSON does FEWER memory operations. JSON.parse creates objects for all 21 fields. Proxy only translates the 3 you access.

Myth

"Alias map is a memory allocation nightmare"

Reality

The key map is a single object with ~20 entries (~200 bytes). Compare that to 21,000 property allocations for 1000 objects x 21 fields.
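A back-of-the-envelope check of that comparison, which is also where the 86% figure quoted on this page comes from:

```javascript
// Rough allocation count for 1,000 records x 21 fields (illustrative).
const records = 1000;
const fields = 21;

// Eager JSON.parse materializes every property of every object.
const eagerAllocations = records * fields; // 21,000 property slots

// Lazy approach: one shared key map plus only the properties actually read.
const keyMapEntries = fields;  // ~21 entries, shared across all rows
const accessedFields = 3;      // e.g. a list view reading 3 fields per row
const lazyAllocations = keyMapEntries + records * accessedFields; // 3,021

const saved = Math.round((1 - lazyAllocations / eagerAllocations) * 100);
console.log(`${eagerAllocations} vs ${lazyAllocations} (${saved}% fewer)`);
```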

Myth

"JSON.stringify & JSON.parse are heavy operations"

Reality

We use the SAME JSON.parse — but on a smaller string (180KB vs 890KB). Smaller string = faster parse. The Proxy wrapper is O(1).

Myth

"Adding more overhead for 15-25% savings"

Reality

This assumes we ADD overhead. We REDUCE it. Lazy expansion means work is deferred/skipped entirely for unused fields.

Tracing the Actual Operations

Standard JSON.parse

1. Parse 890KB string
2. Allocate 1,000 objects × 21 fields = 21,000 properties
3. Access 3 fields per object
4. GC collects 18,000 unused properties

TerseJSON Proxy

1. Parse 180KB string (smaller = faster)
2. Wrap in Proxy (O(1), ~0.1ms)
3. Access 3 fields = 3,000 properties created
4. 18,000 properties NEVER EXIST

86% Fewer Allocations
80% Smaller Parse
0 Unused-Field GC
<5% CPU Overhead

Run the benchmark yourself:

node --expose-gc demo/memory-analysis.js
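A minimal sketch of that style of measurement, in the spirit of (but not copied from) the demo script: force a garbage-collection pass if available, then snapshot `heapUsed` around the workload.

```javascript
// Minimal heap-delta measurement. Run with `node --expose-gc` so
// `global.gc` is defined; without the flag the GC calls are simply
// skipped and the numbers are noisier.
function measureHeap(workload) {
  if (global.gc) global.gc();                    // settle the heap first
  const before = process.memoryUsage().heapUsed;
  const result = workload();                     // keep a reference so GC can't reclaim it
  if (global.gc) global.gc();
  const after = process.memoryUsage().heapUsed;
  return { result, bytes: after - before };
}

// Example workload: eagerly parse 1,000 small JSON objects.
const json = JSON.stringify(
  Array.from({ length: 1000 }, (_, i) => ({ id: i, name: `user-${i}` }))
);
const { result, bytes } = measureHeap(() => JSON.parse(json));
console.log(`parsed ${result.length} rows, heap delta ${(bytes / 1024).toFixed(1)} KB`);
```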

See It In Action

Watch how TerseJSON transforms your API responses in real-time. Try different data types to see the compression in action.

Original Response (454 B)
[
  {
    "firstName": "John",
    "lastName": "Doe",
    "emailAddress": "john@example.com",
    "phoneNumber": "+1-555-0101",
    "createdAt": "2024-01-15",
    "updatedAt": "2024-03-20"
  },
  {
    "firstName": "Jane",
    "lastName": "Smith",
    "emailAddress": "jane@example.com",
    "phoneNumber": "+1-555-0102",
    "createdAt": "2024-02-20",
    "updatedAt": "2024-03-18"
  },
  {
    "firstName": "Bob",
    "lastName": "Wilson",
    "emailAddress": "bob@example.com",
    "phoneNumber": "+1-555-0103",
    "createdAt": "2024-03-01",
    "updatedAt": "2024-03-21"
  }
]
Compressed Response (433 B)
{
  "__terse__": true,
  "v": 1,
  "k": {
    "a": "firstName",
    "b": "lastName",
    "c": "emailAddress",
    "d": "phoneNumber",
    "e": "createdAt",
    "f": "updatedAt"
  },
  "d": [
    {
      "a": "John",
      "b": "Doe",
      "c": "john@example.com",
      "d": "+1-555-0101",
      "e": "2024-01-15",
      "f": "2024-03-20"
    },
    {
      "a": "Jane",
      "b": "Smith",
      "c": "jane@example.com",
      "d": "+1-555-0102",
      "e": "2024-02-20",
      "f": "2024-03-18"
    },
    {
      "a": "Bob",
      "b": "Wilson",
      "c": "bob@example.com",
      "d": "+1-555-0103",
      "e": "2024-03-01",
      "f": "2024-03-21"
    }
  ]
}
Saved 21 B (5% smaller) with just 3 items
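The compressed shape above can be produced by a short encoder. Here is a sketch (not the library's actual code) that assigns aliases a, b, c, ... in first-seen key order:

```javascript
// Sketch of an encoder producing the wire format shown above
// (illustrative only -- not TerseJSON's actual implementation).
function terseEncode(rows) {
  const aliasOf = new Map(); // original key -> short alias
  const k = {};
  const alias = (key) => {
    if (!aliasOf.has(key)) {
      // a, b, ..., z covers the first 26 keys (enough for this sketch)
      const short = String.fromCharCode(97 + aliasOf.size);
      aliasOf.set(key, short);
      k[short] = key;
    }
    return aliasOf.get(key);
  };
  const d = rows.map((row) =>
    Object.fromEntries(Object.entries(row).map(([key, v]) => [alias(key), v]))
  );
  return { __terse__: true, v: 1, k, d };
}

const out = terseEncode([
  { firstName: 'John', lastName: 'Doe' },
  { firstName: 'Jane', lastName: 'Smith' },
]);
console.log(JSON.stringify(out));
```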

Network Bonus: Smaller Payloads Too

Memory efficiency is the main feature — but you also get 30-80% smaller network payloads.

No gzip?

Save 70-80% instantly — easier to add than configuring compression middleware.

Already have gzip?

Stack TerseJSON on top for 15-25% additional savings.

Enterprise scale?

TerseJSON pays for itself in cloud egress costs alone.

Scenario | Original | With TerseJSON | Savings
100 users, 10 fields | 45 KB | 12 KB | 73%
1,000 products, 15 fields | 890 KB | 180 KB | 80%
10,000 logs, 8 fields | 2.1 MB | 450 KB | 79%
Many servers don't have gzip enabled. Express apps, serverless functions (Lambda, Vercel, Cloudflare Workers), and internal APIs often skip compression. TerseJSON is usually easier to add than configuring compression middleware.

Test Your JSON

Paste your JSON data below to see exactly how much TerseJSON can compress it. Get instant verification that your data works perfectly.

Your JSON Input (336 B)
Compressed Output (288 B)
{
  "__terse__": true,
  "v": 1,
  "k": {
    "a": "userId",
    "b": "firstName",
    "c": "lastName",
    "d": "emailAddress",
    "e": "isActive",
    "f": "createdAt"
  },
  "d": [
    {
      "a": 1,
      "b": "John",
      "c": "Doe",
      "d": "john@example.com",
      "e": true,
      "f": "2024-01-15"
    },
    {
      "a": 2,
      "b": "Jane",
      "c": "Smith",
      "d": "jane@example.com",
      "e": true,
      "f": "2024-02-20"
    }
  ]
}
Verified working by TerseJSON: 14.3% smaller
Original size: 336 B
Compressed size: 288 B
Bytes saved: 48 B
Keys compressed: 6

Key Mapping
a → userId
b → firstName
c → lastName
d → emailAddress
e → isActive
f → createdAt

At Scale Projection
Saved per 1K requests: 46.88 KB
Saved per 100K requests: 4.58 MB
Monthly AWS savings (100K requests/day): $0.01

Why TerseJSON?

Built for production. Designed for developers.

Zero Code Changes

Drop-in middleware and client wrapper. Your existing code works unchanged.

Transparent Proxies

Client-side proxies let you access data with original keys. No expansion needed.

TypeScript Ready

Full TypeScript support with generics. Type safety throughout.

Built-in Analytics

Track compression stats, bandwidth savings, and per-endpoint performance.

REST & GraphQL

Works with Express, Apollo Client, Axios, React Query, SWR, and more.

Flexible Patterns

Choose from 5 key patterns or create custom generators. Deep nested support.
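As an illustration of what a custom generator might produce, here is a self-contained sequence generator yielding a..z, then aa, ab, and so on, so objects with more than 26 keys stay compact. How it would plug into TerseJSON's options is not shown here, since that API surface isn't documented on this page; only the sequence logic is sketched.

```javascript
// Hypothetical custom key generator: a..z, then aa, ab, ... (bijective
// base-26). The hook-up to TerseJSON's options is an assumption and is
// intentionally omitted; this shows only the alias sequence itself.
function* shortKeys() {
  for (let n = 0; ; n++) {
    let s = '';
    let i = n;
    do {
      s = String.fromCharCode(97 + (i % 26)) + s;
      i = Math.floor(i / 26) - 1;
    } while (i >= 0);
    yield s;
  }
}

const gen = shortKeys();
const first30 = Array.from({ length: 30 }, () => gen.next().value);
console.log(first30.join(' ')); // a b ... z aa ab ac ad
```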

TerseJSON vs Protobuf

“Why not just use Protobuf?” — A common question with a nuanced answer.

Feature | TerseJSON | Protobuf
Memory on partial access | Only accessed fields allocate | Full deserialization required
Format | JSON (text) | Binary
Schema | None needed | Required .proto files
Setup | app.use(terse()) | Code gen, build pipeline
Human-readable | Yes | No (gibberish in DevTools)
Debugging | Easy | Need special tools
Wire compression | 30-80% | 80-90%+
Client changes | None (Proxy handles it) | Must use generated classes
Migration effort | 2 minutes | Days/weeks

When to use Protobuf

  • You access ALL fields in EVERY object (rare)
  • Already invested in gRPC infrastructure
  • Have dedicated team for schema management
  • Wire size is the only metric that matters

When to use TerseJSON

  • Memory efficiency matters (most apps)
  • You access partial fields (list views, cards, dashboards)
  • Want human-readable debugging in DevTools
  • Need fast integration without build pipeline changes

The bottom line: Protobuf wins on wire compression, but requires full deserialization. TerseJSON wins on memory efficiency — only the fields you access get allocated. For most real-world apps, that's the bigger win.

Mobile-First

Built for the Mobile Web

Mobile devices now dominate web traffic. TerseJSON delivers the biggest impact where it matters most — on phones with limited bandwidth and processing power.

62% of global web traffic is mobile
3G/4G still dominates in emerging markets
80% mobile traffic in India, Nigeria, Sudan

Source: Statcounter 2025, HTTP Archive

Why Mobile Users Need TerseJSON More

Desktop User
  Connection: 50-100 Mbps WiFi
  100KB JSON: ~8ms download
  Impact: Marginal improvement
Mobile User (4G)
  Connection: 5-20 Mbps (variable)
  100KB JSON: ~80ms download
  With TerseJSON: ~24ms (70% faster)

Faster Load Times

Smaller payloads mean faster parsing and rendering on resource-constrained devices.

Works on Slow Networks

3G and spotty 4G connections benefit massively from 70% smaller responses.

Less Battery Drain

Less data to download and process means lower CPU usage and better battery life.

Better Conversion

Mobile bounce rates are 12% higher than desktop. Faster loads = more engagement.

By 2028, mobile will account for 70-80% of all internet traffic. Optimizing for mobile isn't optional anymore — it's where your users are.

The Gzip Reality Check

Gzip Is Harder Than You Think

Most developers assume gzip “just works.” The data tells a different story.

11% of sites have zero compression
12-14% of HTML/text responses are compressed
60% of HTTP responses have no text compression

Source: W3Techs Jan 2026, HTTP Archive Web Almanac

The Proxy Problem

When you have a proxy in front of your Node.js server, gzip configuration gets complicated. The proxy is often managed by a different team — and it's frequently misconfigured or missed entirely.

NGINX

gzip_proxied defaults to off

NGINX does not compress proxied requests by default.

HTTP version mismatch

gzip_http_version defaults to 1.1, but proxy_http_version defaults to 1.0.

Docker image ships disabled

The official nginx image has gzip commented out: #gzip on;

Modern Proxies & PaaS

Traefik: Compress middleware off

Must explicitly add traefik.http.middlewares.compress=true labels to every service.

Dokploy/Coolify inherit Traefik

PaaS platforms using Traefik don't enable compression by default.

K8s ingress-nginx: use-gzip false

Kubernetes ingress-nginx ConfigMap has use-gzip: false by default.

The Layer Conflict

Modern stacks have multiple compression layers that can fight each other. “I enabled gzip but it's not working” is often because another layer is doing something different.

App Layer (Express/Node): compression middleware, might use Brotli
Proxy Layer (Traefik/nginx): tries to compress again, sees Content-Encoding set
Edge Layer (Cloudflare/CDN): may also compress, different algorithm?
Common symptoms:
  • Double compression attempts (proxy sees already-compressed content)
  • Content-Encoding header conflicts between layers
  • Proxy skips compression because app already set the header
  • Different layers using gzip vs Brotli vs zstd

The “proper” nginx fix:

gzip on;
gzip_proxied any;
gzip_http_version 1.0;
gzip_types text/plain application/json
  application/javascript text/css;

Requires DevOps coordination, nginx access, and restart.

The TerseJSON fix:

import { terse } from 'tersejson/express'

app.use(terse())
Works instantly, no DevOps needed
Free and open source
Works regardless of proxy config

Why not just add it? With TerseJSON, you never have to wonder “is gzip actually configured and working?” It works at the application layer, ships with your code, doesn't depend on proxy config or DevOps tickets. If gzip is there, great — extra savings. If it's not, you're still covered.

TerseJSON works at the JSON structure layer — before any byte compression. It doesn't fight with gzip, Brotli, or zstd. Structural compression + byte compression = maximum savings.

New

GraphQL Support

TerseJSON now works with GraphQL. Compress arrays in your GraphQL responses with the same transparent proxy-based expansion as REST.

express-graphql

Drop-in wrapper for express-graphql that automatically compresses responses.

Apollo Client

Apollo Link that handles automatic decompression on the frontend.

How it works

  • Compresses arrays within GraphQL responses
  • Works with queries like users { firstName lastName }
  • Same transparent proxy-based expansion as REST APIs
  • No changes to your GraphQL schema or resolvers
tersejson/graphql
import { graphqlHTTP } from 'express-graphql'
import { terseGraphQL } from 'tersejson/graphql'

app.use('/graphql', terseGraphQL(graphqlHTTP({
  schema: mySchema,
  graphiql: true,
})))

Import from tersejson/graphql for the server and tersejson/graphql-client for Apollo Client.

New in v0.3.1

MongoDB Zero-Config Integration

One line of code. Every query returns memory-efficient Proxies. No changes to your existing MongoDB code.

Zero Config

One function call patches the MongoDB driver. No code changes to your queries.

Memory Efficient

Query results are Proxy-wrapped. Access 3 fields from 20? Only 3 allocate in memory.

Full Coverage

Works with find(), aggregate(), findOne(), and all cursor methods automatically.

Configurable

Control minimum array size, skip single docs, and all standard compression options.

Perfect for

  • Dashboard APIs returning large datasets
  • CMS backends with rich document schemas
  • Any Node.js app using MongoDB native driver
tersejson/mongodb
import { terseMongo } from 'tersejson/mongodb'

// One line - that's it!
await terseMongo()

// All queries now return memory-efficient Proxies
const users = await db.collection('users').find().toArray()
// users[0].name  // Only allocates 'name', not all 20 fields

Import from tersejson/mongodb. Requires MongoDB Node.js driver v5+.

AI & Agentic Workflows

Optimize APIs for AI Agents

JSON is JSON — whether it's going to a React frontend or an LLM. TerseJSON reduces token count, not just bandwidth.

Faster Processing

Smaller payloads mean faster LLM processing. Less data to parse, quicker responses.

Lower Token Costs

Fewer tokens = lower API costs. OpenAI and Anthropic charge per token — TerseJSON reduces your bill.

Cleaner Context

Less noise in the context window. Repetitive keys waste tokens that could be instructions.

More Room for Prompts

Smaller data payloads leave more context budget for your actual prompts and instructions.

Keep context to a minimum leaving room for other instructions... We have more success with many small requests over a few big requests.

— Feedback from developers building agentic workflows

RAG Pipelines

Smaller chunks, more context

Tool Calling

Faster API responses for agents

Data Analysis

Feed more data per request

Enterprise

Built for Scale

At enterprise scale, TerseJSON pays for itself in cloud egress costs alone.

Traffic | Savings/request | Daily Savings | Monthly Savings
1M requests/day | 40 KB | 40 GB | 1.2 TB
10M requests/day | 40 KB | 400 GB | 12 TB
100M requests/day | 40 KB | 4 TB | 120 TB

Real Cost Savings

At $0.09/GB egress (AWS pricing), 10M requests/day saves approximately $1,000/month in bandwidth costs alone.

~$1,000/mo
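The arithmetic behind that figure, assuming a 30-day month and decimal GB:

```javascript
// Egress savings arithmetic for the 10M requests/day row above.
const savedPerRequestKB = 40;          // KB saved per response
const requestsPerDay = 10_000_000;
const egressPerGB = 0.09;              // USD/GB (AWS internet egress)

const dailyGB = (savedPerRequestKB * requestsPerDay) / 1_000_000; // 400 GB/day
const monthlyGB = dailyGB * 30;                                   // 12,000 GB = 12 TB
const monthlyUSD = monthlyGB * egressPerGB;                       // ~$1,080

console.log(dailyGB, monthlyGB, Math.round(monthlyUSD)); // 400 12000 1080
```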

Don't Rewrite. Just Compress.

Critics suggest “rewrite in Rust” or “migrate to Protobuf” — which proves the gap exists. Those aren't realistic options for most teams.

Rust Rewrite: $100K+, months of work
Protobuf Migration: $6-8K per developer, weeks of work
TerseJSON: ~$0, one afternoon

Your devs should be building new features, not rewriting working APIs. TerseJSON gives you memory + bandwidth savings without the migration cost.

Large Array Optimization

Large arrays (1000+ items) are common in dashboards, reports, and data exports. TerseJSON excels here.

Beats Gzip on Scale

Gzip's 32KB sliding window loses efficiency on large arrays. TerseJSON maintains consistent compression.

Reduced Server Load

Less data to serialize and transmit means reduced CPU load and fewer servers needed.

Faster Client Parsing

Faster JSON.parse() on the client means better UX on data-heavy dashboards and reports.

Calculate Your Savings

See how much bandwidth, time, and money you can save based on your API traffic.

Example: 10K requests/day at 50 KB per response
Daily Bandwidth Saved: 317.4 MB
Monthly Savings: 9.30 GB
Monthly Cost Saved: $0.84 (at $0.09/GB)
Yearly Cost Saved: $10 (AWS egress fees)
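A sketch of the calculator itself. The compression ratio is an input rather than a constant, since real savings depend on payload shape; the numbers fed in below are hypothetical, not a reproduction of the figures above.

```javascript
// Generic bandwidth-savings estimator (assumed formula: 30-day month,
// decimal units). compressionRatio is the fraction of bytes removed.
function estimateSavings({ requestsPerDay, responseKB, compressionRatio, egressPerGB = 0.09 }) {
  const savedKBPerRequest = responseKB * compressionRatio;
  const dailyMB = (savedKBPerRequest * requestsPerDay) / 1000;
  const monthlyGB = (dailyMB * 30) / 1000;
  return {
    dailyMB: +dailyMB.toFixed(1),
    monthlyGB: +monthlyGB.toFixed(2),
    monthlyUSD: +(monthlyGB * egressPerGB).toFixed(2),
  };
}

console.log(estimateSavings({
  requestsPerDay: 10_000,
  responseKB: 50,
  compressionRatio: 0.6, // hypothetical 60% payload reduction
}));
```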

Loading Time by Connection

Connection | Before | After | Saved/day
3G | 273ms/req | 96ms/req | 29.6 min
4G LTE | 20ms/req | 7ms/req | 2.2 min
5G | 4ms/req | 1ms/req | 26.6s
WiFi | 8ms/req | 3ms/req | 53.2s

Easy Integration

Get started in minutes with our pre-built integrations for popular frameworks.

import { terseMongo } from 'tersejson/mongodb'
import { MongoClient } from 'mongodb'

// Call once at app startup
await terseMongo()

// All queries automatically return Proxy-wrapped results
const client = new MongoClient(uri)
const users = await client
  .db('mydb')
  .collection('users')
  .find()
  .toArray()

// Access properties normally - 70% less memory
console.log(users[0].firstName) // Works transparently!
Coming Soon

Chrome Extension

See TerseJSON compression in your DevTools. Watch payloads transform in real-time, inspect key mappings, and verify compression is working — all without leaving Chrome.

[Screenshot: TerseJSON Chrome Extension decoding compressed JSON in DevTools]
Real-time inspection · Compression stats · DevTools integration

Frequently Asked Questions

Everything you need to know about TerseJSON and when to use it.