# Slow Queries
In the `zero-cache` logs, you may see statements indicating a query is slow:
```shell
hash=3rhuw19xt9vry transformationHash=1nv7ot74gxfl7
Slow query materialization 325.46865100000286
```
Or, you may just notice queries taking longer than expected in the UI. Here are some tips to help debug such slow queries.
## Query Plan
The `@rocicorp/zero` package ships with a CLI to help debug query plans. You can run it with:
```bash
# see all parameters
npx analyze-query --help
# analyze a specific query
npx analyze-query \
--schema-path="./schema.ts" \
--replica-file="./zero.db" \
--query='albums.where("artistId", "artist_1").orderBy("createdAt", "asc").limit(10)'
```
This command will output the query plan and time to execute each phase of that plan:
```bash
$ npx analyze-query \
--schema-path="./schema.ts" \
--replica-file="./zero.db" \
--query='albums.where("artistId", "artist_1").orderBy("createdAt", "asc").limit(10)'
Loading schema from ./schema.ts
=== Query Stats: ===
total synced rows: 10
albums vended: {
'SELECT "id","title","artist_id","release_year","cover_art_url","created_at","_0_version" FROM "albums" WHERE "artist_id" = ? ORDER BY "created_at" asc, "id" asc': 10
}
Rows Read (into JS): 10
time: 3.12ms
=== Rows Scanned (by SQLite): ===
albums: {
'SELECT "id","title","artist_id","release_year","cover_art_url","created_at","_0_version" FROM "albums" WHERE "artist_id" = ? ORDER BY "created_at" asc, "id" asc': 25
}
total rows scanned: 25
=== Query Plans: ===
query SELECT "id","title","artist_id","release_year","cover_art_url","created_at","_0_version" FROM "albums" WHERE "artist_id" = ? ORDER BY "created_at" asc, "id" asc
SCAN albums
USE TEMP B-TREE FOR ORDER BY
```
Ideally, run this command on the server where your `zero.db` replica file is located, so it uses the same disk as `zero-cache`. Adjust the `--schema-path` to point to your [schema](https://zero.rocicorp.dev/docs/schema) file (you may need to copy this onto the server). The `--query` arg is the ZQL query you want to analyze.
When run locally, the analyzer uses any local `.env` file to find your environment configuration, so you don't need to pass the replica file manually.
### Optimizing the Plan
You should look for any `TEMP B-TREE` entries in the query plan. These indicate that the query is not properly indexed in SQLite, and that `zero-cache` had to create a temporary index to satisfy the query. You should add appropriate indexes upstream to fix this.
> ⚠️ **Primary key ordering**: ZQL appends the primary key columns to every `orderBy` clause to guarantee a predictable total order, skipping any PK columns the query already orders by. Upstream indexes must therefore include the PK columns as well.
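As a sketch of the effect (the table and index names are illustrative, not your actual replica schema — in Zero you would define the index upstream in Postgres and let it replicate), you can reproduce a `TEMP B-TREE` and its fix directly in SQLite:

```shell
# Illustrative only: a scratch SQLite DB mimicking the albums example.
sqlite3 /tmp/plan-demo.db <<'SQL'
CREATE TABLE albums (id TEXT PRIMARY KEY, artist_id TEXT, created_at INTEGER);

-- Without a matching index: full scan plus a temp b-tree for the sort.
EXPLAIN QUERY PLAN
SELECT id FROM albums WHERE artist_id = 'artist_1' ORDER BY created_at, id;

-- Index covering the filter and the full sort order, PK column included.
CREATE INDEX albums_artist_created ON albums (artist_id, created_at, id);

-- Now SQLite searches the index and no temp b-tree is needed.
EXPLAIN QUERY PLAN
SELECT id FROM albums WHERE artist_id = 'artist_1' ORDER BY created_at, id;
SQL
```

The key detail is that the index ends with `id`: because ZQL appends the PK to the `orderBy`, an index on `(artist_id, created_at)` alone would still leave a residual sort.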
Feel free to share your query plans with us in [Discord](https://discord.rocicorp.dev) if you need help optimizing them.
## Check `ttl`
If you are seeing unexpected UI flicker when moving between views, it is possible that the queries backing these views have a `ttl` of `never`. Set the `ttl` to something like `5m` to [keep data cached across navigations](https://zero.rocicorp.dev/docs/queries#query-caching).
Alternatively, you may want to [preload some data](https://zero.rocicorp.dev/docs/queries#running-queries) at app startup.
Conversely, if you are setting `ttl` to long values, then you may have many backgrounded queries running that the app is not using. You can see which queries are running using the [inspector](https://zero.rocicorp.dev/docs/debug/inspector). Ensure that only expected queries are running.
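As a sketch of where `ttl` is set (assuming Zero's React bindings and a Zero instance `z` created elsewhere in your app; the table and component names are illustrative):

```ts
import { useQuery } from '@rocicorp/zero/react';

function AlbumList() {
  // ttl: '5m' keeps this query's results cached for five minutes after the
  // last component using it unmounts, so navigating back doesn't flicker.
  const [albums] = useQuery(
    z.query.albums.orderBy('createdAt', 'desc').limit(50),
    { ttl: '5m' },
  );
  // ... render albums ...
}
```

Consult the [query caching docs](https://zero.rocicorp.dev/docs/queries#query-caching) for the full set of accepted `ttl` values.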
## Locality
If you see log lines like:
```shell
flushed cvr ... (124ms)
```
this indicates that `zero-cache` is likely deployed too far away from your [CVR database](https://zero.rocicorp.dev/docs/deployment#architecture). If you did not configure a CVR database URL then this will be your product's Postgres DB. A slow CVR flush can slow down Zero, since it must complete the flush before sending query result(s) to clients.
Try moving `zero-cache` to be deployed as close as possible to the CVR database.
## Check Storage
`zero-cache` is effectively a database. It requires fast (low-latency, high-bandwidth) disk access to perform well. If you're running on network-attached storage with high latency, or on AWS volumes with low IOPS, then this is the most likely culprit.
Some hosting providers scale IOPS with vCPU. Increasing the vCPU will increase storage throughput and likely resolve the issue.
Fly.io provides physically attached SSDs, even for its smallest VMs. Deploying `zero-cache` there, or on any other provider that offers physically attached SSDs, is another option.
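To get a rough sense of whether storage is the bottleneck, you can measure synchronous write latency on the volume that holds `zero.db` (a hypothetical sketch for Linux; the file path is arbitrary — point it at the replica's volume):

```shell
# oflag=dsync forces each 4 KiB write to be flushed to disk, so the
# reported rate approximates per-write latency. Local NVMe typically
# sustains thousands of these per second; high-latency network-attached
# storage may manage only tens to hundreds.
dd if=/dev/zero of=/tmp/disk-latency-test bs=4k count=1000 oflag=dsync
rm /tmp/disk-latency-test
```

Dedicated tools like `fio` give more detailed numbers, but this one-liner is often enough to spot a slow volume.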
## /statz
`zero-cache` exposes some internal health statistics via its `/statz` endpoint. To access them, you must configure an [admin password](https://zero.rocicorp.dev/docs/zero-cache-config#admin-password).