Zero Schema
Zero applications have both a database schema (the normal backend schema all web apps have) and a Zero schema.
The Zero schema is conventionally located in schema.ts in your app's source code. The Zero schema serves two purposes:
- Provide typesafety for ZQL queries
- Define first-class relationships between tables
The Zero schema is usually generated from your backend schema, but can be defined by hand for more control.
Generating from Database
If you use Drizzle or Prisma ORM, you can generate schema.ts with drizzle-zero or prisma-zero:
npm install -D drizzle-zero
npx drizzle-zero generate
Writing by Hand
You can also write Zero schemas by hand for full control.
Table Schemas
Use the table function to define each table in your Zero schema:
import {table, string, boolean} from '@rocicorp/zero'
const user = table('user')
.columns({
id: string(),
name: string(),
partner: boolean()
})
.primaryKey('id')
Column types are defined with the boolean(), number(), string(), json(), and enumeration() helpers. See Column Types for how database types are mapped to these types.
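As a rough guide, each helper corresponds to a plain TypeScript value type on the synced row. The sketch below shows the resulting row shape (hand-written here for illustration, not generated by Zero; see Column Types for the authoritative mapping):

```typescript
// Sketch: the row type a table using each column helper would produce.
// boolean() -> boolean, number() -> number, string() -> string,
// json<T>() -> T, enumeration<'a' | 'b'>() -> 'a' | 'b'
type ExampleRow = {
  active: boolean;        // boolean()
  count: number;          // number()
  title: string;          // string()
  meta: {tag: string};    // json<{tag: string}>()
  state: 'open' | 'done'; // enumeration<'open' | 'done'>()
};

const row: ExampleRow = {
  active: true,
  count: 1,
  title: 'hello',
  meta: {tag: 'x'},
  state: 'open',
};
```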
Name Mapping
Use from() to map a TypeScript table or column name to a different database name:
const userPref = table('userPref')
// Map TS "userPref" to DB name "user_pref"
.from('user_pref')
.columns({
id: string(),
// Map TS "orgID" to DB name "org_id"
orgID: string().from('org_id')
})
Multiple Schemas
You can also use from() to access other Postgres schemas:
// Sync the "event" table from the "analytics" schema.
const event = table('event').from('analytics.event')
Optional Columns
Columns can be marked optional. This corresponds to the SQL concept of a nullable column.
const user = table('user')
.columns({
id: string(),
name: string(),
nickName: string().optional()
})
.primaryKey('id')
An optional column can store a value of the specified type, or null to mean no value.
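On the client, this means the column reads back as its type or null, so reading code must handle both cases. A minimal sketch in plain TypeScript (the row type is hand-written here, not the Zero API):

```typescript
// Sketch: nickName was declared string().optional(), so the synced
// row type has nickName: string | null.
type UserRow = {id: string; name: string; nickName: string | null};

// Reading code must handle the null case explicitly.
function displayName(u: UserRow): string {
  return u.nickName ?? u.name;
}

const withNick: UserRow = {id: 'u1', name: 'Ada', nickName: 'ada'};
const withoutNick: UserRow = {id: 'u2', name: 'Grace', nickName: null};
```

Here displayName(withoutNick) falls back to the required name column.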
Enumerations
Use the enumeration helper to define a column that can only take on a specific set of values. This is most often used alongside an enum Postgres column type.
import {table, string, enumeration} from '@rocicorp/zero'
const user = table('user')
.columns({
id: string(),
name: string(),
mood: enumeration<'happy' | 'sad' | 'taco'>()
})
.primaryKey('id')
Custom JSON Types
Use the json helper to define a column that stores a JSON-compatible value:
import {table, string, json} from '@rocicorp/zero'
const user = table('user')
.columns({
id: string(),
name: string(),
settings: json<{theme: 'light' | 'dark'}>()
})
.primaryKey('id')
Compound Primary Keys
Pass multiple columns to primaryKey to define a compound primary key:
const user = table('user')
.columns({
orgID: string(),
userID: string(),
name: string()
})
.primaryKey('orgID', 'userID')
Relationships
Use the relationships function to define relationships between tables. Use the one and many helpers to define singular and plural relationships, respectively:
const messageRelationships = relationships(
message,
({one, many}) => ({
sender: one({
sourceField: ['senderID'],
destField: ['id'],
destSchema: user
}),
replies: many({
sourceField: ['id'],
destSchema: message,
destField: ['parentMessageID']
})
})
)
This creates "sender" and "replies" relationships that can later be queried with the related ZQL clause:
const messagesWithSenderAndReplies = z.query.message
.related('sender')
.related('replies')
This will return an object for each message row. Each message will have a sender field that is a single User object or null, and a replies field that is an array of Message objects.
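As a type sketch (hand-written for illustration, not generated by Zero, and with a hypothetical body column), each result row has roughly this shape:

```typescript
type User = {id: string; name: string};
type Message = {
  id: string;
  senderID: string;
  parentMessageID: string | null;
  body: string; // hypothetical column for illustration
};

// Each related() call adds one field to the row: a nullable object for
// one() relationships, an array for many() relationships.
type MessageWithRelated = Message & {sender: User | null; replies: Message[]};

const example: MessageWithRelated = {
  id: 'm1',
  senderID: 'u1',
  parentMessageID: null,
  body: 'hi all',
  sender: {id: 'u1', name: 'Ada'},
  replies: [{id: 'm2', senderID: 'u2', parentMessageID: 'm1', body: 'hi Ada'}],
};
```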
Many-to-Many Relationships
You can create many-to-many relationships by chaining the relationship definitions. Assuming issue and label tables, along with an issueLabel junction table, you can define a labels relationship like this:
const issueRelationships = relationships(
issue,
({many}) => ({
labels: many(
{
sourceField: ['id'],
destSchema: issueLabel,
destField: ['issueID']
},
{
sourceField: ['labelID'],
destSchema: label,
destField: ['id']
}
)
})
)
Compound Keys Relationships
Relationships can traverse compound keys. Imagine a user table with a compound primary key of orgID and userID, and a message table with a related senderOrgID and senderUserID. This can be represented in your schema with:
const messageRelationships = relationships(
message,
({one}) => ({
sender: one({
sourceField: ['senderOrgID', 'senderUserID'],
destSchema: user,
destField: ['orgID', 'userID']
})
})
)
Circular Relationships
Circular relationships are fully supported:
const commentRelationships = relationships(
comment,
({one}) => ({
parent: one({
sourceField: ['parentID'],
destSchema: comment,
destField: ['id']
})
})
)
Database Schemas
Use createSchema to define the entire Zero schema:
import {createSchema} from '@rocicorp/zero'
export const schema = createSchema({
tables: [user, medium, message],
relationships: [
userRelationships,
mediumRelationships,
messageRelationships
]
})
Register Schema Type
Use DefaultTypes to register your Schema type with Zero:
declare module '@rocicorp/zero' {
interface DefaultTypes {
schema: Schema
}
}
This prevents having to pass Schema manually to every Zero API.
Schema Changes
Zero applications have three components that interact with the database schema: Postgres, the API server (query/mutate endpoints), and the client.
Development
During development, you can make changes to all three components in any order:
- Change the Postgres schema
- Update the API server to use the new schema
- Update client code to use the new schema
If the Zero client ever detects that its schema is incompatible with the server, it disconnects and fires the onUpdateNeeded event. If the API server ever detects that it has an incompatible schema, it will fail with an error. Simply reloading the app fixes both issues.
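A minimal way to handle this is to reload when onUpdateNeeded fires. The sketch below assumes the handler is passed as a Zero client constructor option, and injects the reload callback so the logic is self-contained; in a browser you would pass () => location.reload():

```typescript
// Sketch: build an onUpdateNeeded handler for the Zero client.
// The reload callback is injected; in a browser it would be
// () => location.reload().
function makeOnUpdateNeeded(reload: () => void): () => void {
  return () => {
    // The client schema no longer matches the server's; reloading
    // picks up the newly deployed client code and schema.
    reload();
  };
}

// Usage (assumption: onUpdateNeeded is a constructor option):
// new Zero({..., onUpdateNeeded: makeOnUpdateNeeded(() => location.reload())})
let reloaded = false;
const handler = makeOnUpdateNeeded(() => {
  reloaded = true;
});
handler();
```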
Production
Zero also supports downtime-free schema changes for use in production. To achieve this, the order in which you deploy matters:
- Expand (adding things): Deploy providers before consumers. DB → API → Client.
- Contract (removing things): Deploy consumers before providers. Client → API → DB.
Expand Changes
When you're adding a column, table, or new mutator/query:
1. Deploy the database change and wait for it to replicate through zero-cache.
   - In Cloud Zero, you can see replication status in the dashboard.
   - In self-hosted zero-cache, check the logs.
   - If there's backfill, wait for that to complete.
2. Deploy the API server.
3. Deploy the client.
For full-stack frameworks where the API and client deploy together, steps 2 and 3 are combined.
If your change doesn't affect the Postgres schema (for example, just adding a mutator that uses existing columns), skip step 1. If your change doesn't affect the API server, skip step 2.
Contract Changes
When you're removing a column, table, or mutator/query:
- Deploy the client (stop using the thing being removed).
- Deploy the API server (stop providing the thing being removed).
- Deploy the database change.
Compound Changes
Some changes are both expand and contract, like renaming a column or changing a mutator's interface.
For these, you run both patterns in sequence:
- Expand: Add the new column/mutator. Optionally backfill data and add a trigger to keep the old column in sync.
- Contract: Remove the old column/mutator.
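During the expand phase, the old and new columns can also be kept in sync in the API layer instead of (or in addition to) a database trigger, by mirroring writes in the mutator. A sketch with hypothetical field names, not the document's trigger approach:

```typescript
// Sketch: during the expand phase of a nickname -> displayName rename,
// mirror writes so both columns stay consistent for old and new clients.
// UserUpdate is a hypothetical update payload shape.
type UserUpdate = {id: string; nickname?: string; displayName?: string};

function mirrorRename(update: UserUpdate): UserUpdate {
  // Prefer the new field if both are present.
  const value = update.displayName ?? update.nickname;
  if (value === undefined) {
    return update; // neither field touched; nothing to mirror
  }
  return {...update, nickname: value, displayName: value};
}

const mirrored = mirrorRename({id: 'u1', displayName: 'Ada'});
```

Once the contract phase removes the old column, this mirroring code is deleted along with it.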
Examples
Adding a Column
Add a bio column to the users table:
1. Add column to database:

   ALTER TABLE users ADD COLUMN bio TEXT;

   Wait for replication.
2. Deploy API server:
   - Add bio to schema.ts
   - Add any new queries that read bio
   - Add any new mutators that write to bio
   - Deploy
3. Deploy client:
   - Update app code to display/edit bio
   - Deploy
For full-stack frameworks, steps 2 and 3 are a single deploy.
Even when the API server and client are separate, they can be deployed in sequence by CI using a single PR. The client just can't be deployed until the API server is complete.
Removing a Column
Remove the bio column from the users table:
1. Deploy client:
   - Remove bio from app code
   - Deploy
2. Deploy API server:
   - Remove mutators that write to bio
   - Remove queries that read bio
   - Remove bio from schema.ts
   - Deploy
3. Remove column from database:

   ALTER TABLE users DROP COLUMN bio;
Renaming a Column
Rename nickname to displayName:
1. Add new column with trigger:

   ALTER TABLE users ADD COLUMN display_name TEXT;
   UPDATE users SET display_name = nickname;

   CREATE FUNCTION sync_display_name() RETURNS TRIGGER AS $$
   BEGIN
     IF TG_OP = 'INSERT' THEN
       -- On insert, sync whichever column was provided
       IF NEW.display_name IS NULL AND NEW.nickname IS NOT NULL THEN
         NEW.display_name := NEW.nickname;
       ELSIF NEW.nickname IS NULL AND NEW.display_name IS NOT NULL THEN
         NEW.nickname := NEW.display_name;
       END IF;
     ELSE -- UPDATE
       -- Sync whichever column changed
       IF NEW.display_name IS DISTINCT FROM OLD.display_name
          AND NEW.nickname IS NOT DISTINCT FROM OLD.nickname THEN
         NEW.nickname := NEW.display_name;
       ELSIF NEW.nickname IS DISTINCT FROM OLD.nickname
          AND NEW.display_name IS NOT DISTINCT FROM OLD.display_name THEN
         NEW.display_name := NEW.nickname;
       END IF;
     END IF;
     RETURN NEW;
   END;
   $$ LANGUAGE plpgsql;

   CREATE TRIGGER sync_display_name_trigger
   BEFORE INSERT OR UPDATE ON users
   FOR EACH ROW EXECUTE FUNCTION sync_display_name();

   Wait for replication.
2. Deploy app using new column:
   - Add displayName to schema.ts
   - Update app code to read/write displayName
   - Update queries to read/write displayName
   - Update mutators to use displayName
   - Deploy API → Client
3. Remove old column:
   - Remove nickname from schema.ts
   - Deploy Client → API
   - Drop trigger and old column:

     DROP TRIGGER sync_display_name_trigger ON users;
     DROP FUNCTION sync_display_name();
     ALTER TABLE users DROP COLUMN nickname;
Making a Column Optional
Change nickname from required to optional:
The safest approach is to treat this like a renameβcreate a new nullable column:
1. Add new nullable column with trigger:

   ALTER TABLE users ADD COLUMN nickname_v2 TEXT; -- nullable
   UPDATE users SET nickname_v2 = nickname;

   CREATE FUNCTION sync_nickname() RETURNS TRIGGER AS $$
   BEGIN
     IF TG_OP = 'INSERT' THEN
       -- On insert, sync whichever column was provided
       IF NEW.nickname_v2 IS NULL AND NEW.nickname IS NOT NULL THEN
         NEW.nickname_v2 := NEW.nickname;
       ELSIF NEW.nickname IS NULL AND NEW.nickname_v2 IS NOT NULL THEN
         NEW.nickname := COALESCE(NEW.nickname_v2, ''); -- default for old clients
       END IF;
     ELSE -- UPDATE
       -- Sync whichever column changed
       IF NEW.nickname_v2 IS DISTINCT FROM OLD.nickname_v2
          AND NEW.nickname IS NOT DISTINCT FROM OLD.nickname THEN
         NEW.nickname := COALESCE(NEW.nickname_v2, ''); -- default for old clients
       ELSIF NEW.nickname IS DISTINCT FROM OLD.nickname
          AND NEW.nickname_v2 IS NOT DISTINCT FROM OLD.nickname_v2 THEN
         NEW.nickname_v2 := NEW.nickname;
       END IF;
     END IF;
     RETURN NEW;
   END;
   $$ LANGUAGE plpgsql;

   CREATE TRIGGER sync_nickname_trigger
   BEFORE INSERT OR UPDATE ON users
   FOR EACH ROW EXECUTE FUNCTION sync_nickname();

   Wait for replication.
2. Deploy app using new column:
   - Add nicknameV2 to schema.ts as optional()
   - Update app code to handle nulls
   - Deploy API → Client
3. Remove old column:
   - Remove nickname from schema.ts
   - Rename nickname_v2 to nickname if desired (another rename cycle), or keep the new name
   - Deploy Client → API
   - Drop trigger and old column
Quick Reference
| Change | Deploy Order |
|---|---|
| Add column/table | DB → (wait) → API → Client |
| Remove column/table | Client (maybe wait for app update) → API → DB |
| Add mutator/query | API → Client |
| Remove mutator/query | Client → API |
| Change mutator implementation | API only |
| Change mutator interface | Add mutator → Client → Remove mutator |
| Rename column/table | Add new + Migrate → Remove old |
Backfill
When you add a new column or table to your schema, its initial data (e.g., values from GENERATED, DEFAULT, or CURRENT_TIMESTAMP expressions) needs to be replicated to zero-cache and synced to clients.
Similarly, when adding an existing column to a custom publication, that column's existing data needs to be replicated.
Zero handles both these cases through a process called backfilling.
Zero backfills existing data to the replica in the background after detecting a new column. The new column is not exposed to the client until all data has been backfilled, which may take some time depending on the amount of data.
Monitoring Backfill Progress
To track backfill progress, check your zero-cache logs for messages about backfilling status.
If you're using Cloud Zero, backfill progress is displayed directly in the dashboard.