Notes on Zero by Rocicorp

May 9, 2025

Figuring Out How Zero Connects: My Notes on Its Architecture

I’ve been diving into Zero, the new query-powered sync engine, and I'm genuinely impressed. It’s still in alpha, so things are evolving, but the core ideas are incredibly compelling for building fast, reactive web apps.

One thing I found myself sketching out was a clear mental model of how all its main pieces talk to each other, especially with the new Custom Mutators API (which, by the way, is a fantastic move away from the older client-declared CRUD updates with a separate permissions system – having server-authoritative mutations makes so much more sense).

So, here are my notes on the moving parts, for anyone else trying to get a quick grasp:

At the heart of it, you have your PostgreSQL Database. This is your single source of truth. zero-cache subscribes to this database using Postgres logical replication, so it knows about every change.
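One setup note while I'm here: logical replication only works if Postgres is configured for it. The exact values below are examples, not Zero's documented requirements, but `wal_level = logical` is a hard Postgres prerequisite for any logical replication consumer:

```ini
# postgresql.conf – logical replication prerequisites (example values)
wal_level = logical          # emit logical change records in the WAL (required)
max_replication_slots = 10   # a replication consumer holds a slot open
max_wal_senders = 10         # one sender per replication connection
```

Changing `wal_level` requires a Postgres restart, so it's worth checking before you point zero-cache at an existing database.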

Then there's the zero-cache Service itself. This is a standalone process you run. It takes the data replicated from Postgres, keeps an optimized SQLite replica, and is the thing your clients actually talk to. Client applications connect to zero-cache (usually localhost:4848 in dev) over a persistent WebSocket. This WebSocket is key: zero-cache streams reconciled, real-time updates down it. If data changes – whether from the current user, another browser tab, or even a direct database update elsewhere – every connected client sees it live in its queries. It also means client-side mutations can be "fire-and-forget": you don't await them; you trust that the reconciled state will stream back. Mutations are still reflected immediately on the client, so reconciliation becomes a "second pass" update that the client rarely sees or has to worry about. When your client does want to change data using a custom mutator, zero-cache forwards that request by calling a /push endpoint on your application backend.
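To make that "fire-and-forget plus second pass" idea concrete, here's a toy optimistic store. None of these names (`OptimisticStore`, `applyLocal`, `reconcile`) are Zero APIs – this is just the shape of the flow as I understand it:

```typescript
type Issue = { id: string; title: string };

class OptimisticStore {
  // committed = last server-confirmed state; pending = local mutations
  private committed = new Map<string, Issue>();
  private pending: Array<(s: Map<string, Issue>) => void> = [];

  // Called synchronously when the client fires a mutation – nothing to await.
  applyLocal(mutate: (s: Map<string, Issue>) => void): void {
    this.pending.push(mutate);
  }

  // Called when reconciled state streams back over the WebSocket.
  reconcile(serverState: Issue[]): void {
    this.committed = new Map(serverState.map((i) => [i.id, i]));
    this.pending = []; // server state now includes (or rejected) our writes
  }

  // Current view = committed state with pending mutations replayed on top.
  read(): Issue[] {
    const view = new Map(this.committed);
    for (const m of this.pending) m(view);
    return [...view.values()];
  }
}

const store = new OptimisticStore();
store.reconcile([{ id: "1", title: "old" }]);
store.applyLocal((s) => s.set("1", { id: "1", title: "new" })); // instant
console.log(store.read()[0].title); // "new" before any server round-trip
store.reconcile([{ id: "1", title: "new" }]); // the "second pass"
```

The point is that `read()` never waits on the network: the UI renders pending writes immediately, and the reconcile step usually lands the exact same state.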

This brings us to Your Application Backend. This is your own server-side code (Node.js, etc.) where your business logic lives. You implement a /push HTTP endpoint here. When zero-cache calls this endpoint with a mutation request, your backend code executes the authoritative logic – validation, any side effects, and crucially, writing the actual changes to your PostgreSQL database.

Defining what data zero-cache and your clients care about is done via a TypeScript Schema (schema.ts). This file declares the tables, columns, and relationships. It’s used by zero-cache to know what to replicate and by your client for type-safe queries. Your /push endpoint might also use it if you're leveraging Zero's server-side ZQL helpers.
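To give a feel for what schema.ts carries, here's the same information as a plain object. The real file uses builder helpers from @rocicorp/zero (createSchema, table, and friends), so treat this as a sketch of the shape, not the actual API – the table and column names are made up:

```typescript
// Tables, columns, primary keys, and relationships – the information a
// Zero schema declares, spelled out as data for illustration.
const schema = {
  tables: {
    issue: {
      columns: { id: "string", title: "string", ownerId: "string" },
      primaryKey: ["id"],
    },
    user: {
      columns: { id: "string", name: "string" },
      primaryKey: ["id"],
    },
  },
  // issue.ownerId -> user.id gives clients a type-safe `owner` traversal
  relationships: {
    issue: {
      owner: { source: "ownerId", dest: { table: "user", field: "id" } },
    },
  },
} as const;

console.log(Object.keys(schema.tables)); // issue, user
```

Because the same declaration is imported on the client, in zero-cache, and (optionally) in your /push handler, all three ends agree on types without codegen.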

And defining how data changes is the job of TypeScript Custom Mutators (mutators.ts). These are functions you write. There's a client-side part that runs immediately for that snappy optimistic UI update. Then, the server-side part of that mutator is what your Application Backend executes via the /push endpoint to make the real change in Postgres.
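Here's a rough sketch of what a mutators.ts looks like. `Tx` is a hand-rolled stand-in for Zero's real transaction type, and the mutator name is invented – the point is that the same function body runs optimistically on the client and authoritatively on the server via /push:

```typescript
// Minimal stand-in for the transaction a Zero mutator receives.
type Tx = {
  mutate: {
    issue: { update(row: { id: string; title?: string }): Promise<void> };
  };
};

// Mutators close over auth data (here, the JWT `sub`) so the server-side
// run can enforce authorization before anything touches Postgres.
export function createMutators(auth: { sub: string }) {
  return {
    issue: {
      async setTitle(tx: Tx, args: { id: string; title: string }) {
        if (!auth.sub) throw new Error("unauthenticated");
        // Runs immediately on the client (optimistic UI), then again on
        // the server, where the write actually lands in the database.
        await tx.mutate.issue.update({ id: args.id, title: args.title });
      },
    },
  };
}
```

The client-side run writes to the local replica; the server-side run, invoked by your /push handler, is the one whose result Postgres keeps.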

Finally, Your Client Application (React, SolidJS, etc.) uses the Zero client library. It connects to zero-cache over WebSockets, uses the schema and mutators, and subscribes to live queries that just... update. Zero provides great React hooks for live queries and mutations.
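The "live queries that just... update" bit is easier to see in miniature. Here's a dependency-free emitter version of the idea – Zero's useQuery hook from @rocicorp/zero/react wraps something much smarter under the hood, but the subscribe/invalidate shape is the same:

```typescript
type Listener<T> = (rows: T[]) => void;

// A query whose subscribers are re-notified whenever the underlying
// data changes – the essence of a "live" query.
class LiveQuery<T> {
  private listeners = new Set<Listener<T>>();
  constructor(private run: () => T[]) {}

  subscribe(fn: Listener<T>): () => void {
    this.listeners.add(fn);
    fn(this.run()); // emit current results immediately
    return () => this.listeners.delete(fn);
  }

  // Called by the sync layer whenever replicated data changes.
  invalidate(): void {
    for (const fn of this.listeners) fn(this.run());
  }
}

const issues = [{ id: "1", title: "first" }];
const q = new LiveQuery(() => issues.filter((i) => i.title.length > 0));
const seen: number[] = [];
q.subscribe((rows) => seen.push(rows.length));
issues.push({ id: "2", title: "second" });
q.invalidate();
console.log(seen); // [1, 2]
```

In a React component the hook version is just `const [rows] = useQuery(...)` – the subscription and re-render plumbing is hidden, which is exactly why it feels like the data "just updates."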

The flow feels really well thought out: client optimistic update -> zero-cache -> your backend's /push endpoint -> backend writes to Postgres -> Postgres replication tells zero-cache -> zero-cache streams update to all clients.

I'm evaluating Zero quite seriously at Trip To Japan for parts of our internal admin tooling. The promise of "it just works" real-time data without complex state management is very appealing.

It's early days for Zero, but the architecture, especially with custom mutators handling writes authoritatively on the server, feels like a solid foundation. Definitely one to watch.

If you're curious, here's the /push handler I plopped into Next.js:

import { hmac } from "@oslojs/crypto/hmac";
import { SHA256 } from "@oslojs/crypto/sha2";
import { constantTimeEqual } from "@oslojs/crypto/subtle";
import {
  joseAlgorithmHS256,
  JWSRegisteredHeaders,
  JWTRegisteredClaims,
  parseJWT,
} from "@oslojs/jwt";
import type { JSONValue } from "@rocicorp/zero";
import {
  PushProcessor,
  ZQLDatabase,
  type DBConnection,
  type DBTransaction,
  type Row,
} from "@rocicorp/zero/pg";
import { NextResponse, type NextRequest } from "next/server";
import { Client, type ClientBase } from "pg";
import { z } from "zod";

import { createMutators, schema } from "@trip/zero";

import { env } from "~/env";

const payloadSchema = z
  .object({
    sub: z.string(),
  })
  .passthrough();

async function handler(request: NextRequest) {
  // Parse the body up front; fall back to an empty object on bad JSON
  const json = await request
    .json()
    .then((data) => data as JSONValue)
    .catch(() => {
      return {};
    });

  // parseJWT throws on a missing or malformed token, so guard it
  let parsed: ReturnType<typeof parseJWT>;
  try {
    parsed = parseJWT(
      request.headers.get("Authorization")?.replace("Bearer ", "") ?? "",
    );
  } catch {
    return new NextResponse("Invalid token", { status: 401 });
  }
  const [header, payload, signature, signatureMessage] = parsed;

  // Check if the JWT algorithm is valid
  const headerParameters = new JWSRegisteredHeaders(header);
  if (headerParameters.algorithm() !== joseAlgorithmHS256) {
    return new NextResponse("Unsupported algorithm", { status: 401 });
  }

  // Check expiration (verifyExpiration throws if the claim is absent)
  const claims = new JWTRegisteredClaims(payload);
  if (!claims.hasExpiration() || !claims.verifyExpiration()) {
    return new NextResponse("Token expired", { status: 401 });
  }

  // Verify signature
  const secretKey = new TextEncoder().encode(env.SECRET);
  const expectedSignature = hmac(SHA256, secretKey, signatureMessage);

  if (
    expectedSignature.length !== signature.length ||
    !constantTimeEqual(expectedSignature, signature)
  ) {
    return new NextResponse("Invalid signature", { status: 401 });
  }

  const payloadResult = payloadSchema.safeParse(payload);
  if (!payloadResult.success) {
    return new NextResponse("Invalid payload", { status: 401 });
  }
  const { sub } = payloadResult.data;

  const mutators = createMutators({
    sub,
  });

  let client: Client | undefined;

  try {
    client = new Client({
      connectionString: env.DATABASE_URL,
    });
    await client.connect();

    const processor = new PushProcessor(
      new ZQLDatabase(new PgConnection(client), schema),
    );
    const searchParams = request.nextUrl.searchParams;

    const response = await processor.process(mutators, searchParams, json);

    return NextResponse.json(response);
  } catch {
    return NextResponse.json(
      {
        error: "Error processing request",
      },
      {
        status: 500,
      },
    );
  } finally {
    if (client) {
      await client.end();
    }
  }
}

export class PgConnection implements DBConnection<ClientBase> {
  readonly #client: ClientBase;

  constructor(client: ClientBase) {
    this.#client = client;
  }

  async query(sql: string, params: unknown[]): Promise<Row[]> {
    const result = await this.#client.query<Row>(sql, params as JSONValue[]);
    return result.rows;
  }

  async transaction<T>(
    fn: (tx: DBTransaction<ClientBase>) => Promise<T>,
  ): Promise<T> {
    if (!(this.#client instanceof Client)) {
      throw new Error("Transactions require a non-pooled Client instance");
    }

    const tx = new PgTransaction(this.#client);

    try {
      await this.#client.query("BEGIN");
      const result = await fn(tx);
      await this.#client.query("COMMIT");
      return result;
    } catch (error) {
      await this.#client.query("ROLLBACK");
      throw error;
    }
  }
}

class PgTransaction implements DBTransaction<ClientBase> {
  readonly wrappedTransaction: ClientBase;

  constructor(client: ClientBase) {
    this.wrappedTransaction = client;
  }

  async query(sql: string, params: unknown[]): Promise<Row[]> {
    const result = await this.wrappedTransaction.query<Row>(
      sql,
      params as JSONValue[],
    );
    return result.rows;
  }
}

export function pgConnectionProvider(client: Client): () => PgConnection {
  return () => new PgConnection(client);
}

export { handler as GET, handler as POST };

And here's a layout.tsx that passes down a JWT used for auth. Your mileage may vary:

import { hmac } from "@oslojs/crypto/hmac";
import { SHA256 } from "@oslojs/crypto/sha2";
import {
  createJWTSignatureMessage,
  encodeJWT,
  joseAlgorithmHS256,
} from "@oslojs/jwt";

import { withDb } from "~/lib/db";
import { requireAdmin } from "~/server/session";

import { Provider } from "./_components/provider";

import "server-only";

import { env } from "~/env";

export async function getZeroJWT(userId: string): Promise<string> {
  const header = JSON.stringify({ alg: joseAlgorithmHS256 });
  const payload = JSON.stringify({
    sub: userId,
    iat: Math.floor(Date.now() / 1000),
    exp: Math.floor(Date.now() / 1000) + 30 * 24 * 60 * 60, // 30 days expiration
  });

  const secretKey = new TextEncoder().encode(env.SECRET);
  const signatureMessage = createJWTSignatureMessage(header, payload);

  const signature = hmac(SHA256, secretKey, signatureMessage);

  return encodeJWT(header, payload, signature);
}

export default async function Layout({
  children,
}: {
  children: React.ReactNode;
}) {
  const adminUser = await withDb(requireAdmin);
  return (
    <Provider auth={await getZeroJWT(adminUser.id)} user={adminUser}>
      {children}
    </Provider>
  );
}

The provider just wraps <ZeroProvider>:

"use client";

import { Zero } from "@rocicorp/zero";
import { ZeroProvider } from "@rocicorp/zero/react";
import { useMemo, type ReactNode } from "react";

import { createMutators, schema } from "@trip/zero";

import type { SessionUser } from "~/server/session";

export function Provider({
  children,
  auth,
  user,
}: {
  auth: string;
  user: SessionUser;
  children: ReactNode;
}) {
  // Memoize so re-renders don't spin up a fresh Zero client (and WebSocket)
  const zero = useMemo(
    () =>
      new Zero({
        userID: user.id,
        auth,
        server: "http://localhost:4848",
        schema,
        mutators: createMutators({ sub: user.id }),
      }),
    [auth, user.id],
  );
  return <ZeroProvider zero={zero}>{children}</ZeroProvider>;
}