
---
title: Bundler-Agnostic RSC Serialization
date: 2026-04-13
author: Viktor Lázár
github: lazarv
category: Advanced
order: 1
---

import Link from "../../../../components/Link.jsx";
import Subtitle from "../../../../components/Subtitle.jsx";

# Bundler-Agnostic RSC Serialization

<Subtitle>A Standalone Flight Protocol Implementation Without Bundler Coupling, Environment Restrictions, or React Imports</Subtitle>

A technical deep-dive into @lazarv/rsc — a from-scratch implementation of React's Flight protocol that removes the three coupling assumptions present in React's official serializer: dependency on a specific bundler, dependency on specific runtime environments, and dependency on the react-server Node.js export condition. The result is an RSC serializer that runs identically in Node.js, Deno, Bun, Cloudflare Workers, the browser, and any environment that provides Web Platform APIs.

<Link name="abstract"> ## Abstract </Link>

React Server Components (RSC) rely on the Flight protocol — a line-delimited streaming format that serializes React element trees, data structures, and client/server reference metadata across runtime boundaries. The official implementation, react-server-dom-webpack, is tightly coupled to three infrastructure assumptions:

  1. Bundler coupling. The serializer depends on Webpack manifests for client and server reference resolution. Alternative bundlers (Vite, Rollup, esbuild, Rspack) require adapter packages or compatibility shims.
  2. Environment coupling. The server entry point uses Node.js-specific APIs (stream.Readable, Buffer) and the client entry point assumes a browser context. Running the same code in Deno, Bun, or Cloudflare Workers requires separate builds or polyfills.
  3. Export condition coupling. React's internal Flight server code is gated behind the react-server Node.js export condition. Environments that do not support or configure this condition — worker threads, custom runtimes, embedded engines — cannot load the serializer without bundling a condition-resolved build of both the serializer and React.

This paper presents @lazarv/rsc, a standalone implementation of the Flight protocol that removes all three couplings. It achieves full wire-format parity with react-server-dom-webpack while introducing several architectural innovations: abstract module resolver/loader interfaces, Web Platform API-only I/O, Symbol.for()-based React integration without direct imports, reference-counting for data object inline/outline decisions, microtask-coalesced chunk flushing, a synchronous serialization mode, and a zero-copy element tuple scanner on the deserialization path.

Across 13 benchmark scenarios, @lazarv/rsc outperforms react-server-dom-webpack by 1.1×–6.5× on serialization and 1.0×–11.2× on deserialization, with roundtrip improvements of 1.0×–6.1×.

<Link name="the-problem"> ## The Problem: Three Layers of Coupling </Link>

### Bundler Coupling

react-server-dom-webpack requires a Webpack plugin that generates a client and a server manifest — JSON mappings from module specifiers to their bundled output paths. The serializer reads this manifest at runtime to resolve "use client" and "use server" references into chunk metadata that the client and the server can use to load the correct module.

```jsx
// react-server-dom-webpack/server — requires Webpack manifest
import { renderToReadableStream } from "react-server-dom-webpack/server";

// The manifest is generated by the Webpack plugin
const manifest = require("./react-client-manifest.json");
const stream = renderToReadableStream(<App />, manifest);
```

For non-Webpack bundlers, the React team provides react-server-dom-esm (incomplete), and the community has built various adapter shims (react-server-dom-vite, etc.). Each adapter must reverse-engineer the manifest format and provide its own Webpack plugin equivalent. This creates a fragmented ecosystem where every bundler needs custom integration code.

### Environment Coupling

The official package ships four entry points gated by environment:

| Entry | Platform | APIs Used |
| --- | --- | --- |
| `server.node` | Node.js | `stream.Readable`, `stream.Writable` |
| `server.edge` | Edge runtimes | `ReadableStream` |
| `client.browser` | Browser | `ReadableStream`, `fetch` |
| `client.node` | Node.js SSR | `stream.Readable` |

This four-way split creates conditional import complexity. A framework that targets multiple environments (SSR + edge + browser) must dynamically select the correct entry point, handle the API surface differences, and test each combination separately.

### Export Condition Coupling

React's Flight server code lives under the react-server Node.js export condition:

```json
{
  "exports": {
    ".": {
      "react-server": "./server.react-server.js",
      "default": "./server.js"
    }
  }
}
```

This condition must be configured in the bundler or runtime (--conditions=react-server in Node.js, resolve.conditions in Webpack/Vite). Environments that do not support export conditions — or that have not been configured with this specific condition — cannot import the Flight server at all. This is the most insidious coupling because it is invisible: the import succeeds but loads the wrong entry point, producing cryptic runtime errors.

For a project like @lazarv/react-server that uses the Flight protocol in diverse contexts — worker threads for SSR, cache providers for snapshot storage, logger proxies for structured cross-environment logging — the export condition restriction is a fundamental architectural barrier.

<Link name="design-goals"> ## Design Goals </Link>

@lazarv/rsc was designed with the following invariants:

  1. Full Flight protocol parity. Every type supported by react-server-dom-webpack — elements, fragments, Suspense, lazy, memo, forwardRef, context, Activity, ViewTransition, Promises, Map, Set, Date, BigInt, RegExp, Symbol, URL, URLSearchParams, FormData, TypedArrays, ArrayBuffer, DataView, Blob, ReadableStream, async iterables, client/server references, bound actions, temporary references, error digest propagation — must serialize and deserialize identically.
  2. Bundler-agnostic. No Webpack plugin, no Vite plugin, no bundler manifest. Module resolution is an abstract interface that the consumer provides.
  3. Environment-agnostic. A single code path for all environments. Built exclusively on Web Platform APIs: ReadableStream, TextEncoder, TextDecoder, FormData, Blob, URL. No stream.Readable, no Buffer, no AsyncLocalStorage.
  4. No react-server condition. The serializer must work in any environment without requiring special export condition configuration.
  5. No direct React imports. The package must not import from react at any level. React integration happens through Symbol.for() and an optional React instance passed at call time.
  6. Performance parity or better. The implementation must be at least as fast as the official package across representative workloads.
  7. Synchronous mode. For use cases that do not involve Promises or streaming (cache snapshots, logger payloads), a fully synchronous serialize/deserialize path must be available.

<Link name="architecture"> ## Architecture </Link>

The package consists of two entry points and four source files:

| Entry | Source | Role |
| --- | --- | --- |
| `@lazarv/rsc/server` | `server/index.mjs`, `server/shared.mjs` | Serialization: `renderToReadableStream`, `syncToBuffer`, `prerender`, `decodeReply`, reference registration |
| `@lazarv/rsc/client` | `client/index.mjs`, `client/shared.mjs` | Deserialization: `createFromReadableStream`, `createFromFetch`, `syncFromBuffer`, `encodeReply`, server reference proxies |

The index.mjs files are re-export barrels. All logic lives in the shared.mjs files — approximately 3,300 lines for the server and 3,500 lines for the client. There are no platform-conditional branches, no dynamic require() calls, no environment detection beyond a dev-mode flag.

### The React Decoupling Strategy

The most fundamental design decision is how to interact with React without importing it.

React's Flight protocol operates on React element types ($$typeof symbols), internal data structures (lazy payloads, context objects), and — for client component rendering — React's internal hooks dispatcher. The official react-server-dom-webpack imports these directly from react, which creates the react-server condition dependency.

@lazarv/rsc uses three strategies to avoid this:

**Strategy 1: `Symbol.for()` for type detection.**

React element types are global symbols registered via Symbol.for(). Any code — regardless of which React copy is loaded — can detect them:

```js
const REACT_ELEMENT_TYPE = Symbol.for("react.element");
const REACT_TRANSITIONAL_ELEMENT_TYPE = Symbol.for("react.transitional.element");
const REACT_FRAGMENT_TYPE = Symbol.for("react.fragment");
const REACT_SUSPENSE_TYPE = Symbol.for("react.suspense");
const REACT_CLIENT_REFERENCE = Symbol.for("react.client.reference");
const REACT_SERVER_REFERENCE = Symbol.for("react.server.reference");
// ... 15+ additional type symbols
```

Because Symbol.for() returns the same symbol across all realms (including worker threads and iframes), this approach is immune to multiple-React-copy issues that plague instanceof checks.
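The cross-realm identity claim is easy to verify; a minimal illustration, with no React involved:

```javascript
// Symbol.for() reads from the global symbol registry, so two lookups,
// even from different modules, realms, or worker threads, yield the
// exact same symbol.
const a = Symbol.for("react.element");
const b = Symbol.for("react.element");

// An object tagged with the registered symbol is detectable as an
// element without ever importing React.
const tagged = { $$typeof: Symbol.for("react.element") };

console.log(a === b, tagged.$$typeof === a); // true true
```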

**Strategy 2: Structural duck-typing for elements.**

Instead of calling React.createElement(), the serializer inspects objects structurally:

```js
function isReactElement(value) {
  return (
    value !== null &&
    typeof value === "object" &&
    (value.$$typeof === REACT_ELEMENT_TYPE ||
      value.$$typeof === REACT_TRANSITIONAL_ELEMENT_TYPE)
  );
}
```

This works with elements created by any React version (18, 19, experimental) and any JSX transform (classic, automatic, manual createElement calls).

**Strategy 3: Optional React instance for hooks.**

Client components that use hooks (use(), useId(), useMemo(), useCallback(), useEffect()) require React's internal dispatcher. Rather than importing React, @lazarv/rsc accepts an optional react instance in the options:

```jsx
const stream = renderToReadableStream(<App />, {
  react: React, // Optional — only needed if components use hooks
});
```

When provided, the serializer accesses React's internal dispatcher via `React.__SERVER_INTERNALS_DO_NOT_USE_OR_WARN_USERS_THEY_CANNOT_UPGRADE` (or the client equivalent). When not provided, pure server components (no hooks) work normally; hook usage throws a clear error.

This opt-in approach means @lazarv/rsc can serialize plain data structures, React elements, and even re-serialize existing Flight payloads without any React dependency whatsoever.

### Abstract Module Interfaces

Where react-server-dom-webpack uses Webpack manifests, @lazarv/rsc uses two abstract interfaces:

```ts
// Server-side: how to resolve references to metadata
interface ModuleResolver {
  resolveClientReference?(reference: unknown): ClientReferenceMetadata | null;
  resolveServerReference?(reference: unknown): ServerReferenceMetadata | null;
}

// Client-side: how to load modules from metadata
interface ModuleLoader {
  preloadModule?(metadata: ClientReferenceMetadata): Promise<void> | void;
  requireModule(metadata: ClientReferenceMetadata): unknown;
  loadServerAction?(id: string): Promise<Function> | Function;
}
```

The framework (or any consumer) provides these implementations. @lazarv/react-server implements them by connecting to its Vite-generated module graph. Another framework could implement them against Rspack manifests, import maps, or any other module system. The Flight protocol itself is agnostic.
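As a sketch of what a consumer-side implementation could look like — the metadata shape (`{ id, name }`) and the in-memory table are illustrative assumptions, not part of the @lazarv/rsc API:

```javascript
// Hypothetical ModuleLoader backed by a plain in-memory table instead
// of a bundler manifest. Any module system works, as long as these two
// methods can be implemented against it.
const moduleTable = new Map([
  ["src/Counter.jsx", { default: () => "Counter component" }],
]);

const moduleLoader = {
  // Nothing to preload when modules are already in memory.
  preloadModule() {},
  // Map client reference metadata to a concrete module export.
  requireModule(metadata) {
    const mod = moduleTable.get(metadata.id);
    if (!mod) throw new Error(`Unknown module: ${metadata.id}`);
    return mod[metadata.name];
  },
};

const Counter = moduleLoader.requireModule({ id: "src/Counter.jsx", name: "default" });
console.log(Counter()); // "Counter component"
```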

<Link name="serialization-engine"> ## The Serialization Engine </Link>

### Reference Counting and Deduplication

Before serializing, @lazarv/rsc performs a pre-scan of the entire model tree to count how many times each object or array is referenced:

```js
function countReferences(model) {
  const counts = new Map();
  const stack = [model];

  while (stack.length > 0) {
    const value = stack.pop();
    if (value === null || value === undefined) continue;
    if (typeof value !== "object") continue;

    // Skip types always emitted as separate chunks
    if (value instanceof Date || value instanceof RegExp ||
        ArrayBuffer.isView(value) || /* ... */) continue;

    const count = (counts.get(value) || 0) + 1;
    counts.set(value, count);
    if (count > 1) continue; // Already walked children

    // Walk children based on type (array, Map, Set, element, object)
    // ...
  }
  return counts;
}
```

This $O(n)$ pre-scan enables a key optimization: the inline vs. outline decision. Objects referenced exactly once are inlined directly in the parent JSON row. Objects referenced more than once are emitted as separate chunks with their own IDs, and subsequent references use `$<id>` back-references. This preserves object identity on the client while minimizing chunk count and payload size.

Note that client and server reference deduplication (collapsing repeated I rows for the same client component, or repeated server reference chunks for the same action) is standard behavior shared with react-server-dom-webpack. The pre-scan reference counting is a separate optimization that operates on the data layer — plain objects and arrays — determining whether they can be inlined or must be outlined as separate chunks.
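Schematically, for a root model `{ a: shared, b: shared, c: { name: "unique" } }` where `shared` appears twice, the stream could look like this (chunk IDs and row ordering are illustrative, not captured from a real stream):

```text
1:{"name":"shared"}
0:{"a":"$1","b":"$1","c":{"name":"unique"}}
```

The doubly referenced object is outlined as chunk 1 and back-referenced via `$1`, so the client reconstructs a single shared identity; the singly referenced object is inlined into the root row.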

### The `serializeValue` Dispatch

The core serialization function is a 360-line type dispatcher that handles every Flight-serializable type. The dispatch order is performance-critical — the most common types are checked first:

```text
null → undefined → boolean → number (with NaN/±Infinity/−0) →
string (with large-string TEXT row optimization) → bigint → RegExp → symbol →
temporary references → client references → server references → functions →
arrays (inline vs. outline) → React elements → Promises → Date → Map → Set →
ReadableStream → Blob → async iterables → TypedArrays → ArrayBuffer →
FormData → URL → URLSearchParams → Error → plain objects (inline vs. outline)
```

Each type maps to a specific wire-format encoding. The encodings use single-character prefixes ($D for Date, $Q for Map, $W for Set, $n for BigInt, $S for Symbol, etc.) that match the official protocol.

### Large String Optimization

Strings above 1KB are serialized using a length-prefixed binary row format instead of JSON:

```js
if (value.length >= TEXT_CHUNK_SIZE) {
  const id = request.getNextChunkId();
  const textBytes = encoder.encode(value);
  const hexLength = textBytes.byteLength.toString(16);
  const headerStr = `${id}:T${hexLength},`;
  // ... emit as binary chunk ...
  return "$" + id;
}
```

This avoids the overhead of JSON.stringify() for large strings — no quoting, no escape character processing, no extra allocations. The hex-length prefix tells the parser exactly how many bytes to read, enabling zero-copy consumption on the deserialization side.
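For example, a 2,048-byte string produces a header like the following (the chunk ID of 1 is assumed for illustration):

```javascript
// Building the T-row header for a large string, mirroring the snippet
// above. 2048 bytes encode as hex length "800".
const encoder = new TextEncoder();
const value = "x".repeat(2048);
const textBytes = encoder.encode(value);
const header = `1:T${textBytes.byteLength.toString(16)},`;
console.log(header); // "1:T800," ; exactly 0x800 raw bytes follow, unquoted
```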

### Microtask-Coalesced Chunk Flushing

During synchronous serialization, multiple rows are produced (the root model, shared object chunks, client reference chunks, server reference chunks). The official implementation flushes each row individually, producing one ReadableStream.enqueue() call per row.

@lazarv/rsc suppresses flushing during synchronous work and batches all rows into a single enqueue() call:

```js
function startWork(request) {
  // Suppress per-writeChunk flushing
  const wasFlowing = request.flowing;
  request.flowing = false;

  try {
    const serialized = serializeValue(request, request.model, null, null);
    const row = request.serializeModelRow(0, serialized);
    request.writeChunk(row);

    // Restore flowing and flush ALL rows in one batch
    request.flowing = wasFlowing;
    if (request.flowing && request.destination) {
      request.flushChunks();
    }
  } catch (error) {
    // ...
  }
}
```

The flush itself coalesces all buffered chunks into a single Uint8Array:

```js
flushChunks() {
  if (this.completedChunks.length === 0) return;
  const chunks = this.completedChunks;
  this.completedChunks = []; // Swap-first for re-entrancy safety

  // Encode and merge
  const encoded = Array.from({ length: chunks.length });
  let totalLength = 0;
  for (let i = 0; i < chunks.length; i++) {
    const chunk = chunks[i];
    encoded[i] = chunk instanceof Uint8Array ? chunk : encoder.encode(chunk);
    totalLength += encoded[i].length;
  }

  if (encoded.length === 1) {
    this.destination.enqueue(encoded[0]);
  } else {
    const merged = new Uint8Array(totalLength);
    let offset = 0;
    for (const e of encoded) { merged.set(e, offset); offset += e.length; }
    this.destination.enqueue(merged);
  }
}
```

This produces fewer ReadableStream reads on the consumer, fewer `<script>` tags in SSR HTML (when inlining flight data), and less cross-thread MessagePort traffic when the stream is transferred to an SSR worker.

### The Swap-First Re-Entrancy Guard

The flushChunks method uses a swap-first pattern: it snapshots the current queue into a local variable and atomically replaces this.completedChunks with a fresh empty array before iterating.

This is not a theoretical concern — it is a battle-tested fix for a production bug. Node.js's ReadableStream implementation can fire pull() as a synchronous microtask during controller.enqueue() when a pending reader is waiting. Without the swap, the re-entrant pull() handler calls flushChunks() again, sees the same unflushed array, and writes the in-flight chunk a second time — producing duplicate rows on the flight stream.

The swap makes re-entrant flushes no-ops: they see an empty queue and return immediately. New chunks produced during the re-entrant path push to the fresh array, which is drained by the next flush cycle.

<Link name="deserialization-engine"> ## The Deserialization Engine </Link>

### Lazy Promise Allocation

The official deserializer creates a Promise for every chunk, even when the chunk resolves synchronously (which is the common case — most chunks are resolved during the same processData() call that creates them).

@lazarv/rsc defers Promise allocation:

```js
createChunk(id) {
  return {
    id,
    status: PENDING,
    value: undefined,
    _promise: null,   // No Promise allocated
    _resolve: null,
    _reject: null,
  };
}

_ensurePromise(chunk) {
  if (chunk._promise !== null) return chunk._promise;

  if (chunk.status === RESOLVED) {
    // Already resolved — return pre-settled promise
    const p = Promise.resolve(chunk.value);
    p.status = "fulfilled";
    p.value = chunk.value;
    chunk._promise = p;
  } else {
    // Still pending — allocate real promise
    const p = new Promise((res, rej) => {
      chunk._resolve = res;
      chunk._reject = rej;
    });
    p.catch(() => {}); // suppress unhandled rejection
    p.status = "pending";
    chunk._promise = p;
  }
  return chunk._promise;
}
```

For a typical flight payload with hundreds of chunks, the vast majority resolve synchronously and never need a Promise. This saves two closure allocations and one Promise allocation per chunk — a measurable improvement on large payloads.

### Element Tuple Scanner

React elements are serialized as JSON arrays: `["$", type, key, props]`. When a row contains a large props object (200 KB+ is common for content-heavy pages), `JSON.parse()` of the entire row is expensive.

@lazarv/rsc implements a custom scanner that extracts the element header fields without parsing the props:

```js
function _scanElementTuple(str) {
  // Caller verified: str starts with '["$",'
  let i = 5; // past '["$",'

  // Field 1: type — usually short ("div", "$L1")
  const typeStart = i;
  i = _skipJsonValue(str, i);
  const typeEnd = i;

  // ... skip comma, whitespace ...

  // Field 2: key — usually null or a short string
  const keyStart = i;
  i = _skipJsonValue(str, i);
  const keyEnd = i;

  // Parse only the tiny header fields
  const type = JSON.parse(str.slice(typeStart, typeEnd));
  const key = JSON.parse(str.slice(keyStart, keyEnd));

  // Everything after is the raw props string — NOT parsed yet
  return { type, key, rawPropsStr: str.slice(i + 1, str.length - 1) };
}
```

The `_skipJsonValue` helper scans past a JSON value character-by-character without allocating any objects. For element headers (type + key), this is $O(1)$ with respect to props size. The raw props string is parsed lazily — only when the deserialized element's props are actually accessed.

A companion `_scanObjectRow` function applies the same technique to plain object rows, identifying key→value boundaries without parsing large nested values eagerly.
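A simplified sketch of what such a skipper can look like — this illustrates the technique, it is not the actual `_skipJsonValue` source, and it assumes well-formed JSON input:

```javascript
// Advance past one JSON value starting at index i; return the index
// just after it. No intermediate objects are allocated along the way.
function skipJsonValue(str, i) {
  const c = str[i];
  if (c === '"') {
    // String: honor backslash escapes, stop at the closing quote.
    i++;
    while (str[i] !== '"') i += str[i] === "\\" ? 2 : 1;
    return i + 1;
  }
  if (c === "{" || c === "[") {
    // Object/array: track nesting depth, skipping strings so that
    // brackets inside string literals do not count.
    let depth = 1;
    i++;
    while (depth > 0) {
      const ch = str[i];
      if (ch === '"') { i = skipJsonValue(str, i); continue; }
      if (ch === "{" || ch === "[") depth++;
      else if (ch === "}" || ch === "]") depth--;
      i++;
    }
    return i;
  }
  // Number, true, false, null: scan to the next delimiter.
  while (i < str.length && !',}] \n\t'.includes(str[i])) i++;
  return i;
}

const row = '["$","div",null,{"children":"hi"}]';
const afterType = skipJsonValue(row, 5); // skips "div" without parsing it
console.log(afterType, row.slice(5, afterType)); // 10 '"div"'
```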

### Module Import Buffering

The Flight protocol emits I (Import/Module) rows before the model rows that reference them. When module loading is synchronous (as in production bundles), the module chunk is resolved before the model row that references it. But when module loading is asynchronous (as in Vite's dev server, where import() is used), the module chunk may still be PENDING when the model row arrives.

@lazarv/rsc handles this by buffering model rows when async imports are in flight:

```js
// In the consume loop:
if (this.pendingModuleImports.length > 0) {
  this.pendingRows.push(rowData);
} else {
  processRow(rowData);
}

// After imports settle:
async flushPendingRows() {
  await Promise.all(this.pendingModuleImports);
  this.pendingModuleImports.length = 0;
  for (const row of this.pendingRows) {
    processRow(row);
  }
  this.pendingRows.length = 0;
}
```

This avoids the need for lazy wrappers or deferred resolution — when the buffered rows are replayed, all module chunks are already resolved, and element construction proceeds synchronously.

<Link name="synchronous-mode"> ## Synchronous Serialization Mode </Link>

@lazarv/rsc provides a synchronous round-trip path that no other Flight implementation offers:

```js
import { syncToBuffer } from "@lazarv/rsc/server";
import { syncFromBuffer } from "@lazarv/rsc/client";

// Serialize to Uint8Array — no Promises, no streams
const buffer = syncToBuffer(model, options);

// Deserialize from Uint8Array — synchronous, returns value directly
const value = syncFromBuffer(buffer, options);
```

### Motivation

Not all Flight protocol use cases involve network streaming. In @lazarv/react-server, the synchronous mode powers two systems:

  1. Cache providers. UI and data snapshots are serialized to binary buffers for storage in any backend (Redis, filesystem, SQLite). The Flight protocol's rich type support (Map, Set, Date, BigInt, TypedArrays, React elements with client references) makes it superior to JSON.stringify() for caching — and the synchronous API avoids the complexity of stream management in cache read/write paths.
  2. Logger proxy. Structured log data is serialized across environment boundaries using the Flight protocol, preserving type fidelity that JSON.stringify() would destroy (Date becomes ISO string, Map becomes {}, Set becomes [], BigInt throws).

### Implementation

syncToBuffer uses the same FlightRequest and startWork machinery as renderToReadableStream, but replaces the ReadableStream controller with a simple array collector:

```js
export function syncToBuffer(model, options = {}) {
  const request = new FlightRequest(model, options);
  const chunks = [];

  request.destination = {
    enqueue(chunk) {
      chunks.push(chunk instanceof Uint8Array ? chunk : encoder.encode(chunk));
    },
    close() {},
    error() {},
  };
  request.flowing = true;

  startWork(request);
  request.flushChunks();

  // Concatenate into single Uint8Array
  let totalLength = 0;
  for (const chunk of chunks) totalLength += chunk.length;
  const result = new Uint8Array(totalLength);
  let offset = 0;
  for (const chunk of chunks) { result.set(chunk, offset); offset += chunk.length; }
  return result;
}
```

syncFromBuffer feeds the entire buffer into the Flight parser in one call and returns the resolved root value directly. Because there are no Promises or async iterables in the input, all chunks resolve synchronously and the result is available immediately.

### Type Preservation vs. JSON

The synchronous mode preserves types that JSON.stringify() / JSON.parse() destroys:

| Type | `JSON.stringify` → `JSON.parse` | `syncToBuffer` → `syncFromBuffer` |
| --- | --- | --- |
| Date | ISO string (lossy) | Date object |
| Map | `{}` (lossy) | Map |
| Set | `[]` (lossy) | Set |
| BigInt | throws | BigInt |
| undefined | omitted | undefined |
| -0 | `0` (lossy) | -0 |
| NaN | `null` (lossy) | NaN |
| Infinity | `null` (lossy) | Infinity |
| RegExp | `{}` (lossy) | RegExp |
| `Symbol.for()` | omitted (lossy) | Symbol |
| Uint8Array | `{0:…, 1:…}` (lossy) | Uint8Array |
| Circular refs | throws | Preserved via `$<id>` |

This makes syncToBuffer / syncFromBuffer a drop-in replacement for JSON.stringify() / JSON.parse() that handles the full JavaScript type system.
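The left column of the table is easy to reproduce; for most types the loss is silent rather than an error:

```javascript
// What a plain JSON round trip silently destroys:
const data = { when: new Date(0), tags: new Set(["a", "b"]), big: 1 / 0 };
const back = JSON.parse(JSON.stringify(data));

console.log(typeof back.when); // "string"; the Date collapsed to ISO text
console.log(back.tags);        // {}; the Set lost its entries entirely
console.log(back.big);         // null; Infinity is not representable
```

Per the table above, a `syncToBuffer`/`syncFromBuffer` round trip of the same object returns a real Date, a Set, and Infinity.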

<Link name="wire-format"> ## Wire Format Details </Link>

### Row Format

Each row in the Flight stream follows the format:

```text
<id>:<tag><payload>\n
```

Where `<id>` is a decimal chunk identifier, `<tag>` is a single character indicating the row type, and `<payload>` is the row content (typically JSON). The root model is always chunk ID 0.

| Tag | Name | Payload |
| --- | --- | --- |
| (empty) | Model | JSON value |
| `I` | Import | `[moduleId, chunks, exportName]` or `[moduleId, chunks, exportName, 1]` (async) |
| `E` | Error | `{"message": "...", "stack": "...", "digest": "..."}` |
| `H` | Hint | Preload hint metadata |
| `D` | Debug | Debug info (dev mode only) |
| `T` | Text | Length-prefixed raw UTF-8 text |
| `B` | Binary | Base64-encoded binary chunk |
| `P` | Postpone | PPR postpone marker |
| `W` | Console | Console log replay data |
| `N` | Nonce | Timestamp (dev mode timing) |

Binary rows (TypedArrays, ArrayBuffer, Text) use a length-prefixed format instead of newline termination:

```text
<id>:<tag><hex_length>,<binary_data>
```
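Putting the row format and tags together, a minimal stream mounting a single client component could look like the following sketch (the module ID, chunk path, and props are illustrative):

```text
1:I["src/Counter.jsx",["/assets/Counter-abc123.js"],"default"]
0:["$","$L1",null,{"initial":0}]
```

The `I` row registers the client reference as chunk 1; the root model row then references it through the `$L1` marker, which the module loader resolves to the actual component.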

### Value Encoding Prefixes

Serialized values within JSON payloads use single-character prefixes to encode non-JSON types:

| Prefix | Type | Example |
| --- | --- | --- |
| `$undefined` | undefined | |
| `$NaN` | NaN | |
| `$Infinity` | Infinity | |
| `$-Infinity` | -Infinity | |
| `$-0` | Negative zero | |
| `$n` | BigInt | `$n12345678901234567890` |
| `$D` | Date | `$D2024-06-15T12:00:00.000Z` |
| `$S` | Symbol | `$Sbench.symbol` |
| `$R` | RegExp | `$R/pattern/flags` |
| `$Q` | Map | `$Q<id>` (entries in separate chunk) |
| `$W` | Set | `$W<id>` (items in separate chunk) |
| `$L` | Client reference | `$L<id>` |
| `$h` | Server reference | `$h<id>` |
| `$T` | Temporary reference | `$T<path>` |
| `$l` | URL | `$lhttps://example.com` |
| `$U` | URLSearchParams | `$U[["a","1"],["b","2"]]` |
| `$K` | FormData | `$K[["field","value"]]` |
| `$Z` | Error | `$Z{"name":"...","message":"..."}` |
| `$<id>` | Back-reference | Points to previously emitted chunk |
| `@<id>` | Promise | Points to async chunk |

<Link name="server-component-rendering"> ## Server Component Rendering </Link>

When the serializer encounters a React element whose type is a function (not a client reference), it renders it as a server component — calling the function with its props and serializing the return value.

### Hooks Dispatcher

Server components can use a subset of React hooks: use(), useId(), useMemo(), useCallback(), useMemoCache(). These require React's internal dispatcher (internals.H) to be set correctly during the component call.

@lazarv/rsc implements a minimal dispatcher that supports these hooks without importing React:

```js
const HooksDispatcher = {
  use(usable) {
    if (typeof usable.then === "function") {
      return trackUsedThenable(thenableState, usable, thenableIndexCounter++);
    }
    // ...
  },
  useId() {
    return "_" + (currentRequest.identifierPrefix || "S") + "_" +
           currentRequest.identifierCount++.toString(32) + "_";
  },
  useMemo(nextCreate) { return nextCreate(); },
  useCallback(callback) { return callback; },
  useMemoCache(size) {
    const data = Array(size);
    for (let i = 0; i < size; i++) data[i] = REACT_MEMO_CACHE_SENTINEL;
    return data;
  },
  // Unsupported hooks throw clear errors
  useState: unsupportedHook,
  useEffect: unsupportedHook,
  useReducer: unsupportedHook,
  // ...
};
```

The dispatcher is installed before each component call and restored afterwards via callComponentWithDispatcher, which also handles the SuspenseException thrown by use() when it encounters an unresolved thenable.

### Suspense and Retry

When use() encounters a pending Promise, it throws a SuspenseException — a sentinel error that signals the serializer to retry the component after the Promise resolves. @lazarv/rsc handles this with a retry chain:

```js
function retryComponentRender(request, type, props, savedState, ...) {
  return new Promise((resolve, reject) => {
    function attempt(prevThenableState, waitFor) {
      waitFor.then(() => {
        try {
          const result = callComponentWithDispatcher(
            request, type, props, prevThenableState
          );
          resolve(result);
        } catch (error) {
          if (isThenable(error)) {
            // Component suspended again — chain another retry
            attempt(error._savedThenableState, error);
          } else {
            reject(error);
          }
        }
      }, reject);
    }
    attempt(savedState, blockedThenable);
  });
}
```

Each retry restores the thenableState from the previous attempt, ensuring that use() calls before the suspension point return their cached values. This matches React's per-task retry semantics.

<Link name="security"> ## Security </Link>

Flight payloads cross a trust boundary — and in both directions. The server-to-client stream carries rendered UI and data from a trusted origin to an untrusted environment; the client-to-server reply carries arbitrary attacker-controlled bytes back into the server process to dispatch a server action. The second direction is the more dangerous one: a flawed decoder becomes a remote code execution surface. @lazarv/rsc therefore ships a dedicated, hardened reply decoder that matches the security barriers React added after CVE-2025-55182, with additional defenses layered on top.

The threat model assumes:

  1. The client is hostile.
  2. The transport is untrusted.
  3. Server references must only execute functions the host has explicitly registered.
  4. Every decode must terminate in bounded time and memory.

### Prototype Pollution Barriers

JSON parsing of an attacker-controlled payload is the classic prototype-pollution vector: a key of `__proto__` becomes an own property that mutates `Object.prototype` when assigned to a fresh object. The decoder blocks this in two independent places:

  1. Reviver-side stripping. `JSON.parse` runs with a reviver that returns `undefined` for any key in `FORBIDDEN_KEYS = { __proto__, constructor, prototype }`. These keys are removed before they become own properties of the parsed tree.
  2. Path-walk enforcement. Back-references of the form `$<hex>:<key>:<key>` that walk into a previously decoded chunk must satisfy four invariants at every step:

     ```js
     if (
       current === null ||
       typeof current !== "object" ||
       (Object.getPrototypeOf(current) !== ObjectPrototype &&
         Object.getPrototypeOf(current) !== ArrayPrototype) ||
       !hasOwn.call(current, key) ||
       FORBIDDEN_KEYS.has(key)
     ) {
       throw new DecodeError("Invalid reference.");
     }
     ```

The prototype pin blocks walks into non-plain objects — a forged path cannot reach `Map.prototype.get` or `Function.prototype.call`. The own-property check blocks `.constructor`, `.then`, `.map`, and any other inherited name. The forbidden-key set is defense-in-depth in case a forbidden key survives the reviver.
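Barrier (1) is ordinary `JSON.parse` machinery; a minimal sketch of the reviver idea (the key set comes from the source, the surrounding wiring is illustrative):

```javascript
// Keys for which the reviver returns undefined are removed from the
// parsed tree before application code can ever observe them.
const FORBIDDEN_KEYS = new Set(["__proto__", "constructor", "prototype"]);

const parsed = JSON.parse(
  '{"__proto__":{"polluted":true},"ok":1}',
  (key, value) => (FORBIDDEN_KEYS.has(key) ? undefined : value)
);

console.log(parsed.ok); // 1
console.log(Object.prototype.hasOwnProperty.call(parsed, "__proto__")); // false
console.log({}.polluted); // undefined; Object.prototype was never touched
```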

### Thenable Scrubbing

Duck-typed thenables are a subtle escalation path: any object with a callable `then` method will be awaited as a Promise by downstream code, handing the attacker a synchronous callback on the server. The decoder scrubs every `then` property whose value is a function, replacing it with `null` during the walk phase. Non-function `then` values (strings, objects, numbers) are preserved — they are data, not a capability.
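A sketch of the scrub described above (the recursive walk is illustrative and assumes acyclic input, which holds for freshly parsed JSON):

```javascript
// Replace any function-valued `then` with null; leave non-function
// `then` values alone, since they are data rather than a capability.
function scrubThenables(value) {
  if (value === null || typeof value !== "object") return value;
  for (const key of Object.keys(value)) {
    if (key === "then" && typeof value[key] === "function") {
      value[key] = null;
    } else {
      scrubThenables(value[key]);
    }
  }
  return value;
}

const hostile = { then: () => "attacker callback", note: { then: "plain data" } };
scrubThenables(hostile);
console.log(hostile.then);      // null; no longer awaitable
console.log(hostile.note.then); // "plain data"; preserved
```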

### Callable Allowlist

No code path in the decoder constructs a callable from decoded bytes. Concretely:

  • No `eval`, no `new Function`, no dynamic `import()` on decoded data. The only `new Function` call in the entire package is the client-side RegExp deserializer (`new Function("return " + regexStr)`), which parses a regex literal emitted by a trusted server — never by a client.
  • Server references are allowlist-gated. A `$h<id>` tag resolves exclusively through `moduleLoader.loadServerAction(id)`, which the host implements. The host is responsible for ensuring that only functions explicitly registered via `"use server"` are reachable; the decoder never invents function identities from strings.
  • Temporary references are opaque proxies. A `$T<path>` tag resolves to an entry in a request-scoped `temporaryReferences` map that throws on any access other than the registered one. Temporary references are scoped to the request that created them — a reply cannot reach a temporary reference from a different request.

The attacker's reach through a Flight reply is therefore bounded by the set of functions the host has chosen to register. The decoder cannot be tricked into widening that set.
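A host-implemented allowlist along these lines might look as follows. The `moduleLoader.loadServerAction` interface name comes from the text; the registry shape here is an illustrative sketch, not the framework's actual registration code.

```javascript
// Sketch of an allowlist-gated server-action registry: only functions
// explicitly registered (e.g. during "use server" module evaluation)
// are reachable; unknown ids throw instead of resolving.
const serverActions = new Map();

function registerServerAction(id, fn) {
  serverActions.set(id, fn);
}

const moduleLoader = {
  loadServerAction(id) {
    const action = serverActions.get(id);
    if (typeof action !== "function") {
      throw new Error(`Unknown server action: ${id}`);
    }
    return action;
  },
};
```

Because the decoder never derives a function from decoded bytes, a forged `$h<id>` tag with an unregistered id fails at this lookup rather than reaching arbitrary code.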

### Resource Ceilings

A decoder that runs in bounded memory and bounded time is a denial-of-service-resistant decoder. Every reply decode is gated by seven explicit ceilings:

```js
export const DEFAULT_LIMITS = Object.freeze({
  maxRows: 10_000,
  maxDepth: 128,
  maxBytes: 32 * 1024 * 1024,
  maxBoundArgs: 256,        // matches React
  maxBigIntDigits: 4096,    // matches React
  maxStringLength: 16 * 1024 * 1024,
  maxStreamChunks: 10_000,
});
```

Any limit breach throws a `DecodeLimitError` tagged with the specific limit name and the observed value, making attacks observable in logs. Hosts override these per-request to match their threat model — stricter on public endpoints, relaxed on authenticated internal services. The server-function-limits feature in `@lazarv/react-server` surfaces this configuration to application code.
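A per-endpoint tightening of the ceilings might look like this. The `DEFAULT_LIMITS` values are the package's documented defaults; the override mechanism shown (spreading into a new frozen object) is a host-side pattern, not a specific API of the package.

```javascript
// The package's documented default ceilings:
const DEFAULT_LIMITS = Object.freeze({
  maxRows: 10_000,
  maxDepth: 128,
  maxBytes: 32 * 1024 * 1024,
  maxBoundArgs: 256,
  maxBigIntDigits: 4096,
  maxStringLength: 16 * 1024 * 1024,
  maxStreamChunks: 10_000,
});

// Hypothetical stricter ceilings for a public, unauthenticated endpoint;
// unlisted limits inherit the defaults.
const PUBLIC_LIMITS = Object.freeze({
  ...DEFAULT_LIMITS,
  maxRows: 1_000,
  maxBytes: 1024 * 1024,
  maxStreamChunks: 500,
});
```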

### Error Redaction

Serialized errors cross the network as `E` rows of the form `{"message":"...","stack":"...","digest":"..."}`. Sending raw server error messages and stacks to a hostile client leaks file paths, library versions, schema fragments, and occasionally secrets embedded in query strings. `@lazarv/rsc` matches React's convention: in production, only `digest` is transmitted. The host registers an `onError` callback that receives the full `Error` server-side — for logging, tracing, and ticket correlation — and returns an opaque digest that the client uses solely for error-boundary matching. The full error never leaves the server process.
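A host-side `onError` callback following this convention might be sketched as below. The FNV-1a hash is purely illustrative (chosen to keep the example dependency-free); a production host would typically use a cryptographic hash or a log-correlation id.

```javascript
// 32-bit FNV-1a hash, used here only to produce a short stable digest.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16).padStart(8, "0");
}

// Sketch of a host-registered onError callback: the full Error is
// available here for logging and tracing; only the opaque digest is
// returned, and only the digest crosses the network in production.
function onError(error) {
  const digest = fnv1a(String(error && error.message));
  // Log the full error server-side with the digest for correlation.
  return digest;
}
```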

### What the Package Does Not Defend Against

The following are deliberately out of scope for the serializer and must be enforced by the host:

- Authentication and authorization on server references. Registration is capability-based: any registered function is callable by any client that can reach the Flight reply endpoint. CSRF protection, origin checks, session validation, and per-action authorization are the host's responsibility — typically implemented as middleware around the reply endpoint.
- Transport-level integrity of bound arguments. The decoder does not sign or encrypt payloads. Bound-argument encryption is provided as a separate feature in `@lazarv/react-server` (see Server Function Encryption) and operates at the framework layer above the protocol.
- Request-rate limiting. Resource ceilings bound a single decode; they do not bound request volume. Hosts should rate-limit the reply endpoint like any other action endpoint.

<Link name="benchmarks"> ## Benchmark Results </Link>

All benchmarks were run locally using Vitest's bench mode against identical fixtures. The `react-server-dom-webpack` package uses the same React experimental version (`0.0.0-experimental-ab18f33d-20260220`). Each scenario runs for at least 100 iterations with warmup.

### Serialization (`renderToReadableStream`)

| Scenario | @lazarv/rsc (ops/s) | webpack (ops/s) | Speedup |
| --- | --- | --- | --- |
| react: minimal element | 659,111 | 101,468 | 6.5× |
| react: shallow wide (1,000) | 4,594 | 777 | 5.9× |
| react: deep nested (100) | 31,043 | 13,457 | 2.3× |
| react: product list (50) | 13,423 | 4,709 | 2.8× |
| react: large table (500×10) | 650 | 212 | 3.1× |
| data: primitives | 470,899 | 95,603 | 4.9× |
| data: large string (100KB) | 16,230 | 12,671 | 1.3× |
| data: nested objects (20) | 118,966 | 65,035 | 1.8× |
| data: large array (10K) | 299 | 296 | 1.0× |
| data: Map & Set | 23,300 | 14,179 | 1.6× |
| data: Date/BigInt/Symbol | 442,356 | 91,292 | 4.8× |
| data: typed arrays | 120,916 | 28,574 | 4.2× |
| data: mixed payload | 17,901 | 10,868 | 1.6× |

The largest gains appear on workloads dominated by React element construction and type dispatch overhead. The minimal element benchmark — a single `<div>` with text — isolates per-element overhead, where `@lazarv/rsc` is 6.5× faster. The large array benchmark (10,000 plain objects) is dominated by `JSON.stringify()` cost on both sides, showing near-parity.

### Deserialization (`createFromReadableStream`)

| Scenario | @lazarv/rsc (ops/s) | webpack (ops/s) | Speedup |
| --- | --- | --- | --- |
| react: minimal element | 476,960 | 381,668 | 1.2× |
| react: shallow wide (1,000) | 44,071 | 3,947 | 11.2× |
| react: deep nested (100) | 239,453 | 41,968 | 5.7× |
| react: product list (50) | 99,581 | 28,174 | 3.5× |
| react: large table (500×10) | 9,313 | 3,233 | 2.9× |
| data: primitives | 371,182 | 346,218 | 1.1× |
| data: large string (100KB) | 75,716 | 75,692 | 1.0× |
| data: nested objects (20) | 193,801 | 160,777 | 1.2× |
| data: large array (10K) | 687 | 657 | 1.0× |
| data: Map & Set | 33,833 | 30,537 | 1.1× |
| data: Date/BigInt/Symbol | 401,727 | 311,983 | 1.3× |
| data: typed arrays | 125,078 | 105,250 | 1.2× |
| data: mixed payload | 49,114 | 35,550 | 1.4× |

Deserialization gains are most dramatic on wide trees: shallow wide (1,000 siblings) is 11.2× faster. This is where the element tuple scanner and lazy promise allocation have the greatest impact — 1,000 elements means 1,000 chunks where @lazarv/rsc avoids Promise allocation for synchronously-resolved chunks.
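The lazy-allocation idea behind these numbers can be sketched in a few lines. This is a minimal illustration of the pattern, not the package's internal `Chunk` implementation: a chunk returns its value directly once resolved and only materializes a real `Promise` when a consumer reads it while still pending.

```javascript
// Minimal sketch of lazy Promise allocation for stream chunks.
class Chunk {
  constructor() {
    this.status = "pending";
    this.value = undefined;
    this._promise = null;
    this._resolve = null;
  }
  resolve(value) {
    this.status = "resolved";
    this.value = value;
    if (this._resolve) this._resolve(value); // settle late readers
  }
  read() {
    if (this.status === "resolved") return this.value; // no Promise allocated
    if (!this._promise) {
      this._promise = new Promise((resolve) => (this._resolve = resolve));
    }
    return this._promise; // allocated only on the first pending read
  }
}
```

For a wide tree where most chunks arrive before anything reads them, the synchronous path skips one `Promise` allocation (and the microtask machinery behind it) per chunk.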

### Roundtrip (serialize + deserialize)

| Scenario | @lazarv/rsc (ops/s) | webpack (ops/s) | Speedup |
| --- | --- | --- | --- |
| react: minimal element | 348,494 | 84,554 | 4.1× |
| react: shallow wide (1,000) | 4,039 | 668 | 6.0× |
| react: deep nested (100) | 27,359 | 10,081 | 2.7× |
| react: product list (50) | 11,887 | 3,969 | 3.0× |
| react: large table (500×10) | 621 | 209 | 3.0× |
| data: primitives | 231,206 | 78,626 | 2.9× |
| data: large string (100KB) | 14,046 | 12,174 | 1.2× |
| data: nested objects (20) | 77,477 | 47,783 | 1.6× |
| data: large array (10K) | 205 | 204 | 1.0× |
| data: Map & Set | 13,606 | 9,829 | 1.4× |
| data: Date/BigInt/Symbol | 238,041 | 72,715 | 3.3× |
| data: typed arrays | 84,489 | 26,101 | 3.2× |
| data: mixed payload | 12,938 | 8,383 | 1.5× |

### Prerender (`prerender`)

`@lazarv/rsc` also provides a `prerender` API that serializes a model to a static prelude `ReadableStream`, waiting for all async work to complete before returning. Selected results:

| Scenario | ops/s | Mean latency |
| --- | --- | --- |
| react: minimal element | 719,453 | 1.4 µs |
| react: product list (50) | 12,670 | 78.9 µs |
| react: large table (500×10) | 613 | 1.63 ms |
| data: primitives | 519,470 | 1.9 µs |
| data: mixed payload | 18,621 | 53.7 µs |

### A Note on webpack Benchmark Stability

The `react-server-dom-webpack` benchmarks exhibit significantly higher variance (RME of 9–31%) compared to `@lazarv/rsc` (RME of 0.2–2.6%). This suggests architectural differences in allocation patterns — higher GC pressure leads to more variable latency. The `@lazarv/rsc` numbers are more stable, indicating fewer and more predictable allocations per operation.

Additionally, `react-server-dom-webpack`'s serializer detaches `ArrayBuffer` backing stores during `TypedArray` serialization, requiring fresh fixture construction on each benchmark iteration for `TypedArray` scenarios. `@lazarv/rsc` uses non-destructive reads (`new Uint8Array(value.buffer, value.byteOffset, value.byteLength)`), allowing fixture reuse.
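The non-destructive read quoted above can be demonstrated directly: constructing a byte view over the source array's buffer leaves the source intact and reusable, whereas a detach (e.g. via `ArrayBuffer.prototype.transfer`) would zero it out.

```javascript
// View-based read: reinterpret any TypedArray's bytes as a Uint8Array
// without copying and without detaching the backing ArrayBuffer.
function readTypedArrayBytes(value) {
  return new Uint8Array(value.buffer, value.byteOffset, value.byteLength);
}

const fixture = new Float64Array([1.5, 2.5]); // 2 × 8 bytes
const bytes = readTypedArrayBytes(fixture);   // 16-byte view, zero copy
```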

<Link name="comparison"> ## Comparison with `react-server-dom-webpack` </Link>
| Dimension | react-server-dom-webpack | @lazarv/rsc |
| --- | --- | --- |
| Bundler | Webpack only (plugin + manifest) | Any (abstract `ModuleResolver` interface) |
| Runtime | Node.js (server) + browser (client) | Any Web Platform runtime |
| Export condition | Requires `react-server` condition | No condition required |
| React dependency | Direct import from `react` | `Symbol.for()` + optional instance |
| Entry points | 4 (server.node, server.edge, client.browser, client.node) | 2 (server, client) — same code everywhere |
| Synchronous mode | Not available | `syncToBuffer` / `syncFromBuffer` |
| Object deduplication | Client/server reference dedup | Client/server reference dedup + pre-scan reference counting for data objects |
| Chunk flushing | Per-row | Microtask-coalesced with swap-first re-entrancy guard |
| Promise allocation | Eager (every chunk) | Lazy (only when awaited) |
| Element parsing | Full `JSON.parse` per row | O(1) header scan + deferred props parse |
| TypedArray handling | Destructive (detaches ArrayBuffer) | Non-destructive (view-based read) |
| Flight protocol | Full | Full parity |
| `renderToReadableStream` | ✅ | ✅ |
| `renderToPipeableStream` | ✅ (Node.js only) | — (use `ReadableStream` everywhere) |
| `createFromReadableStream` | ✅ | ✅ |
| `createFromNodeStream` | ✅ (Node.js only) | — (use `ReadableStream` everywhere) |
| `encodeReply` / `decodeReply` | ✅ | ✅ |
| Temporary references | ✅ | ✅ |
| Bound actions (`.bind()`) | ✅ | ✅ |
| Error digest propagation | ✅ | ✅ |
| Prerender | ✅ | ✅ |
<Link name="use-cases"> ## Use Cases Beyond Rendering </Link>

The decoupled design of `@lazarv/rsc` enables Flight protocol usage beyond the traditional "server renders, client hydrates" pattern:

| Direction | Use Case | Why Flight over JSON |
| --- | --- | --- |
| Server → Client | Streaming RSC rendering | Standard use case |
| Server → Cache | Snapshot storage in Redis/filesystem/SQLite | Type preservation (Map, Set, Date, BigInt, client refs) |
| Cache → Server | Snapshot restoration | Reconstructs full React element trees with references |
| Worker → Main | Cross-thread React tree transfer | Structured clone alternative with streaming |
| Process → Process | IPC with typed data | Flight handles types that `JSON.stringify()` destroys |
| Server → Logger | Structured log serialization | Preserves type fidelity across environment boundaries |
| Any → Any | General-purpose serialization | `syncToBuffer`/`syncFromBuffer` as a better `JSON.stringify`/`JSON.parse` |
<Link name="conclusion"> ## Conclusion </Link>

`@lazarv/rsc` demonstrates that the Flight protocol — React's serialization format for Server Components — can be cleanly separated from its three historical coupling points: Webpack, Node.js, and the `react-server` export condition. The key architectural insights are:

  1. Symbol.for() replaces imports. React's type system is built on global symbols. Any code can detect React elements, client references, and server references without importing React — making the react-server export condition unnecessary for serialization.
  2. Abstract interfaces replace manifests. Bundler-specific manifest formats are an implementation detail. An abstract ModuleResolver / ModuleLoader interface lets any bundler or runtime provide its own module resolution logic without adapter shims.
  3. Web Platform APIs replace Node.js APIs. ReadableStream runs everywhere. A single entry point per side (server/client) eliminates the four-way platform split and the conditional import complexity it creates.
  4. Pre-scan reference counting enables inline/outline decisions for data objects. A single O(n) walk before serialization determines which objects can be inlined and which must be outlined as separate chunks to preserve identity — reducing chunk count and payload size.
  5. Microtask-coalesced flushing with swap-first re-entrancy safety produces fewer, larger stream chunks — reducing downstream costs in SSR HTML injection, cross-thread transfer, and consumer iteration.
  6. Lazy Promise allocation and O(1) element scanning remove per-chunk overhead on the deserialization path, with the largest impact on wide trees common in real applications.
  7. Synchronous mode unlocks the Flight protocol for use cases (caching, logging, IPC) that do not involve network streaming, making it a general-purpose typed serialization format.
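Insight 1 can be illustrated directly: React elements are detectable through the global symbol registry alone, with no import of React. Note the registered symbol name depends on the React version (`"react.element"` before React 19, `"react.transitional.element"` in React 19+), so a robust check accepts both; the element literal below is a hand-built sketch of the plain-object shape JSX compiles to.

```javascript
// Detect React elements via Symbol.for() — no React import required.
const REACT_ELEMENT_TYPES = new Set([
  Symbol.for("react.transitional.element"), // React 19+
  Symbol.for("react.element"),              // earlier versions
]);

function isReactElement(value) {
  return (
    typeof value === "object" &&
    value !== null &&
    REACT_ELEMENT_TYPES.has(value.$$typeof)
  );
}

// A plain object shaped like an element, built here without React:
const element = {
  $$typeof: Symbol.for("react.transitional.element"),
  type: "div",
  key: null,
  props: { children: "hello" },
};
```

Because `Symbol.for()` reads from the cross-realm global registry, this check agrees with any React instance loaded in the same runtime, which is what makes the `react-server` export condition unnecessary for serialization.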

The result is a Flight protocol implementation that is faster, more portable, and more versatile than the official package — while maintaining full wire-format compatibility. Any framework, runtime, or tool can adopt RSC serialization by depending on @lazarv/rsc and implementing two interfaces.