
Conversation


@BilalG1 BilalG1 commented Nov 26, 2025

Summary by CodeRabbit

  • New Features

    • ClickHouse analytics backend for event storage and querying
    • Analytics query API with timing stats, mapped errors, and admin-only migration endpoints
    • Dashboard: interactive analytics query editor (ClickHouse SQL) and admin UI to migrate/backfill events with progress reporting
  • Infrastructure

    • Docker Compose, CI steps and local scripts added to run ClickHouse; new env vars and run script support
  • Tests

    • End-to-end tests for analytics queries and event retrieval; snapshot redactions added
  • Docs / Errors

    • New user-facing analytics error types for timeouts and query failures

✏️ Tip: You can customize this high-level summary in your review settings.


Note

Add ClickHouse-backed analytics: readonly query API with timing, dual-write event storage and migration, dashboard query editor/migration UI, and local/CI setup.

  • Backend (Analytics + ClickHouse):
    • ClickHouse integration: New @/lib/clickhouse client, env vars, and migrations creating analytics.events; row policy and restricted limited_user grants.
    • Event ingestion: Dual-write backend events to analytics.events (src/lib/events.tsx); a rough sketch of this write follows the list below.
    • Query API: New admin endpoint POST /api/v1/internal/analytics/query executing readonly, tenant-scoped ClickHouse queries and returning results + CPU/wall times; adds KnownErrors for analytics.
    • Migration API: Internal endpoint to backfill events from Postgres to ClickHouse with cursor/limits; wired into DB migration script.
  • Dashboard:
    • Analytics Query UI: Page with ClickHouse SQL editor (Monaco, autocomplete) and JSON results.
    • ClickHouse Migration UI: Internal page to run batched backfills with progress/cursor.
  • Shared/SDK:
    • Admin interface additions (queryAnalytics, ClickHouse migration) and analytics types.
  • DevOps:
    • Docker Compose/CI add ClickHouse services; .env examples and clickhouse scripts; Dev Launchpad entries.
  • Tests:
    • E2E coverage for analytics query behavior, permissions/limits, event visibility; snapshot serializer redacts timing fields.
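
As a rough illustration of the dual-write item above (this is a sketch built from the summary, not the actual src/lib/events.tsx code; the client setup and column names such as is_wide, event_started_at, event_ended_at, and team_id are assumptions about the exact schema):

```typescript
import { createClient } from "@clickhouse/client";

// Hypothetical client; the real code presumably reuses the shared client from @/lib/clickhouse.
const clickhouse = createClient({ url: process.env.STACK_CLICKHOUSE_URL });

type AnalyticsEventRow = {
  id: string,
  project_id: string,
  branch_id: string,
  user_id: string | null,
  team_id: string | null,     // teamId was added to the event schema in this PR
  event_type: string,
  is_wide: boolean,
  event_started_at: string,   // ISO timestamps; exact column types are assumptions
  event_ended_at: string,
  data: string,               // JSON-encoded payload
};

// Postgres stays the primary store; the same event is also sent to ClickHouse
// as a JSONEachRow async insert so ingestion doesn't block the request path.
export async function writeEventToClickhouse(row: AnalyticsEventRow): Promise<void> {
  await clickhouse.insert({
    table: "analytics.events",
    values: [row],
    format: "JSONEachRow",
    clickhouse_settings: {
      async_insert: 1,          // let the server batch inserts
      wait_for_async_insert: 0, // fire-and-forget acknowledgement
    },
  });
}
```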

Written by Cursor Bugbot for commit 1134883. This will update automatically on new commits. Configure here.


vercel bot commented Nov 26, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Review | Updated (UTC) |
|---|---|---|---|
| stack-backend | Error | Error | Dec 20, 2025 3:33am |
| stack-dashboard | Ready | Ready (Preview, Comment) | Dec 20, 2025 3:33am |
| stack-demo | Ready | Ready (Preview, Comment) | Dec 20, 2025 3:33am |
| stack-docs | Error | Error | Dec 20, 2025 3:33am |


coderabbitai bot commented Nov 26, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds ClickHouse integration across infra and app layers: client library, migrations, admin/external clients, dual-write event logging, analytics query and events-migration internal APIs, dashboard UIs (query + migration), Docker/GHA changes, types/errors, and E2E tests with snapshot adjustments.

Changes

  • Infrastructure & Env (apps/backend/.env, apps/backend/.env.development, docker/server/.env.example, docker/dependencies/docker.compose.yaml, .github/workflows/docker-server-build-run.yaml, apps/dev-launchpad/public/index.html, package.json): Added ClickHouse env vars; Docker Compose service & volume for ClickHouse; adjusted drizzle-gateway port; GitHub Actions step to start ClickHouse; launchpad entries; new top-level and backend clickhouse scripts.
  • Backend Dependency & Migration Scripts (apps/backend/package.json, apps/backend/scripts/clickhouse-migrations.ts, apps/backend/scripts/db-migrations.ts): Added @clickhouse/client dependency; new migrations script to create DB/table/user/policies/views and integrated runClickhouseMigrations into the DB migration flow.
  • ClickHouse Client & Utilities (apps/backend/src/lib/clickhouse.tsx): New factory createClickhouseClient(authType, database), exported clickhouseAdminClient and clickhouseExternalClient, and getQueryTimingStats, which queries system.query_log after flushing logs.
  • Analytics Query API (apps/backend/src/app/api/latest/internal/analytics/query/route.ts): New POST route for admin-only parameterized ClickHouse queries with enforced settings/limits/timeouts, error mapping to KnownErrors, and timing stats in the response.
  • Event Migration API (apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx): New internal POST route to backfill Postgres events into ClickHouse with cursor-based paging, JSONEachRow bulk insert (async_insert), and progress/next_cursor reporting.
  • Event Logging & Tokens (apps/backend/src/lib/events.tsx, apps/backend/src/lib/tokens.tsx): Dual-write events to ClickHouse (JSONEachRow async_insert) including timing fields (is_wide, event_started_at, event_ended_at); added teamId to the event schema and included teamId in the token-generated logging payload.
  • Dashboard — Query UI (apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/query-analytics/*, apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/query-analytics/page.tsx, apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/query-analytics/monaco-clickhouse.ts): New Monaco-based ClickHouse SQL editor with language configuration, autocomplete, run shortcut, result display, and integration with adminApp.queryAnalytics; a sketch of the Monaco wiring follows this list.
  • Dashboard — Migration UI (apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/clickhouse-migration/*, apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/clickhouse-migration/page.tsx): New migration UI component and page to run batched migrations, manage the cursor, show progress/stats, and control start/stop/reset flows; a sketch of the client-side batching loop follows this list.
  • Types & Admin Interface (packages/stack-shared/src/interface/crud/analytics.ts, packages/stack-shared/src/interface/admin-interface.ts, packages/template/src/lib/stack-app/apps/interfaces/admin-app.ts, packages/template/src/lib/stack-app/apps/implementations/admin-app-impl.ts): Added AnalyticsQueryOptions/AnalyticsQueryResponse and ClickhouseMigrationRequest/ClickhouseMigrationResponse types, plus admin interface methods queryAnalytics and migrateEventsToClickhouse (with a pass-through implementation).
  • Known Errors (packages/stack-shared/src/known-errors.tsx): Added AnalyticsQueryTimeout and AnalyticsQueryError KnownError constructors and included them in the KnownErrors registry/type.
  • E2E Tests & Utilities (apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts, apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts, apps/e2e/tests/snapshot-serializer.ts, apps/e2e/tests/backend/backend-helpers.ts): Added comprehensive analytics query and events E2E tests, redacted timing fields in the snapshot serializer, and tightened camelCase detection in test helpers.
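
The Dashboard Migration UI item above drives the cursor-based migrate-events endpoint in a loop; a minimal client-side sketch of that loop (the method name comes from the admin interface additions in this PR, but the option and response field names are assumptions taken from the sequence diagram further below, not the actual ClickhouseMigrationRequest/Response types):

```typescript
// Sketch only: the real ClickhouseMigrationRequest/Response types may differ.
type MigrationBatchResult = {
  processed_events: number,
  inserted_rows: number,
  progress: number,          // assumed to be a 0..1 fraction
  next_cursor: string | null,
};

type MigrationAdminApp = {
  migrateEventsToClickhouse: (options: { cursor?: string, limit: number }) => Promise<MigrationBatchResult>,
};

// Drive the cursor-based endpoint until the backend reports no further cursor.
export async function runFullMigration(adminApp: MigrationAdminApp, onProgress: (progress: number) => void) {
  let cursor: string | undefined = undefined;
  while (true) {
    const batch = await adminApp.migrateEventsToClickhouse({ cursor, limit: 1000 });
    onProgress(batch.progress);
    if (batch.next_cursor === null) return;
    cursor = batch.next_cursor;
  }
}
```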

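The Monaco setup referenced in the Dashboard Query UI item is likewise only summarized; registering a ClickHouse-flavoured language with a basic completion provider generally looks like the following sketch (the language id and keyword list are placeholders, not the contents of monaco-clickhouse.ts):

```typescript
import * as monaco from "monaco-editor";

// Sketch: register a ClickHouse SQL language id and a simple keyword completion
// provider. The real monaco-clickhouse.ts likely ships a fuller tokenizer and
// schema-aware suggestions.
const LANGUAGE_ID = "clickhouse-sql";
monaco.languages.register({ id: LANGUAGE_ID });

const KEYWORDS = ["SELECT", "FROM", "WHERE", "GROUP BY", "ORDER BY", "LIMIT", "toStartOfDay", "countIf"];

monaco.languages.registerCompletionItemProvider(LANGUAGE_ID, {
  provideCompletionItems: (model, position) => {
    // Replace the word currently being typed with the chosen suggestion.
    const word = model.getWordUntilPosition(position);
    const range = {
      startLineNumber: position.lineNumber,
      endLineNumber: position.lineNumber,
      startColumn: word.startColumn,
      endColumn: word.endColumn,
    };
    return {
      suggestions: KEYWORDS.map((kw) => ({
        label: kw,
        kind: monaco.languages.CompletionItemKind.Keyword,
        insertText: kw,
        range,
      })),
    };
  },
});
```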
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Dashboard as Dashboard (React)
    participant Backend as Backend API
    participant ClickHouse as ClickHouse
    participant QueryLog as system.query_log

    User->>Dashboard: Run SQL query (query, params, timeout_ms)
    Dashboard->>Backend: POST /internal/analytics/query
    Backend->>Backend: Authenticate, generate query_id
    Backend->>ClickHouse: Execute parameterized query (settings: project/branch, timeout, limits)
    ClickHouse-->>Backend: JSON result rows
    Backend->>ClickHouse: SYSTEM FLUSH LOGS (admin)
    Backend->>QueryLog: SELECT cpu_time_ms, wall_clock_time_ms WHERE query_id=...
    QueryLog-->>Backend: timing stats
    Backend-->>Dashboard: { result, timing }
    Dashboard->>User: Display results + timing
```
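A rough sketch of the backend portion of this first flow, using @clickhouse/client (the client wiring, settings, and limits shown here are assumptions drawn from the diagram and walkthrough, not the actual route code):

```typescript
import { randomUUID } from "node:crypto";
import { createClient } from "@clickhouse/client";

// Hypothetical restricted client; the real route reportedly uses the limited_user
// credentials via clickhouseExternalClient from @/lib/clickhouse.
const client = createClient({
  url: process.env.STACK_CLICKHOUSE_URL,
  username: "limited_user",
  password: process.env.STACK_CLICKHOUSE_EXTERNAL_PASSWORD,
});

export async function runAnalyticsQuery(query: string, params: Record<string, unknown>, timeoutMs: number) {
  // A known query_id lets the route look up CPU/wall time in system.query_log afterwards.
  const queryId = randomUUID();
  const resultSet = await client.query({
    query,
    query_id: queryId,
    query_params: params,
    format: "JSON",
    clickhouse_settings: {
      max_execution_time: Math.ceil(timeoutMs / 1000), // server-side timeout, in seconds
      // The real endpoint reportedly also applies readonly mode, result-size limits,
      // and project/branch scoping settings; those knobs are omitted in this sketch.
    },
  });
  const payload = await resultSet.json<Record<string, unknown>>();
  return { queryId, rows: payload.data };
}
```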
```mermaid
sequenceDiagram
    actor User
    participant Dashboard as Migration UI
    participant Backend as Backend API
    participant Postgres as Postgres
    participant ClickHouse as ClickHouse

    User->>Dashboard: Start migration (min,max,cursor,limit)
    loop per batch
        Dashboard->>Backend: POST /internal/clickhouse/migrate-events
        Backend->>Postgres: SELECT events WHERE created_at BETWEEN range ORDER BY created_at,id LIMIT N
        Postgres-->>Backend: rows
        Backend->>Backend: Transform rows → ClickHouse JSONEachRow
        Backend->>ClickHouse: INSERT INTO analytics_internal.events (JSONEachRow, async_insert)
        ClickHouse-->>Backend: insert ack
        Backend-->>Dashboard: { processed_events, inserted_rows, progress, next_cursor }
        Dashboard->>User: Update progress
    end
    Dashboard->>User: Migration complete
```
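A simplified sketch of a single backfill batch from this second flow (the SQL, table and column names, and cursor encoding are assumptions; the real route pages over the Prisma-managed events table and also supports a created_at range):

```typescript
import type { ClickHouseClient } from "@clickhouse/client";
import type { Client as PgClient } from "pg";

type Cursor = { createdAt: string, id: string };

// Sketch of one backfill batch; not the actual migrate-events route.
export async function migrateBatch(
  pg: PgClient,                 // an already-connected Postgres client (the real code uses Prisma)
  clickhouse: ClickHouseClient, // the admin ClickHouse client
  cursor: Cursor | null,
  limit: number,
) {
  // Keyset pagination on (created_at, id) keeps batches stable and resumable.
  const { rows } = await pg.query(
    `SELECT id, created_at, data
     FROM "Event"
     WHERE $1::timestamptz IS NULL OR (created_at, id) > ($1::timestamptz, $2::uuid)
     ORDER BY created_at, id
     LIMIT $3`,
    [cursor?.createdAt ?? null, cursor?.id ?? null, limit],
  );
  if (rows.length === 0) return { processedEvents: 0, nextCursor: null };

  await clickhouse.insert({
    table: "analytics.events",
    values: rows.map((r) => ({
      id: r.id,
      created_at: r.created_at,
      data: JSON.stringify(r.data), // event payload stored as a JSON string column (assumption)
    })),
    format: "JSONEachRow",
    clickhouse_settings: { async_insert: 1 },
  });

  const last = rows[rows.length - 1];
  return {
    processedEvents: rows.length,
    nextCursor: { createdAt: last.created_at, id: last.id },
  };
}
```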

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Areas to focus during review:

  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts — sandboxing, settings, error mapping, timeout handling, result limits.
  • apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx — cursor correctness, ordering, idempotency, progress calculation, bulk insert handling.
  • apps/backend/src/lib/events.tsx — dual-write ordering, data shape, timing fields (is_wide/event_started_at/event_ended_at), teamId defaults.
  • apps/backend/src/lib/clickhouse.tsx — credential selection (admin vs external), query_log assumptions and parsing.
  • E2E tests & snapshot serializer — flakiness from timings and retry logic.

Possibly related PRs

Suggested reviewers

  • N2D4

Poem

🐰
I hopped to the logs with whiskers bright,
ClickHouse opened columns through the night.
From Postgres burrows the events leap free,
Dashboards hum, queries nibble data with glee.
Nibble, count, report — analytics delight! 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 9.09%, which is insufficient. The required threshold is 80.00%. | You can run @coderabbitai generate docstrings to improve docstring coverage. |
| Title check | ❓ Inconclusive | The title 'clickhouse setup' is vague and generic, using non-descriptive terms that don't convey the full scope of the comprehensive ClickHouse integration changes. | Consider a more specific title like 'Add ClickHouse-backed analytics with query API, event ingestion, and dashboard UI' to better summarize the main scope of changes. |

✅ Passed checks (1 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description check | ✅ Passed | The pull request description comprehensively documents the ClickHouse integration across backend, dashboard, DevOps, and tests with clear section organization and implementation details. |
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch clickhouse-setup

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


greptile-apps bot commented Nov 26, 2025

Greptile Overview

Greptile Summary

This PR establishes the foundational ClickHouse integration for analytics, adding database infrastructure, client library, API endpoint, and comprehensive security controls.

Major Changes:

  • Added ClickHouse service to Docker Compose with proper volume management and port configuration
  • Implemented two-tier authentication system: admin user for migrations/system operations and limited_user for query execution with restricted permissions
  • Created /api/latest/analytics/query endpoint with SmartRouteHandler that enforces server-level auth, parameterized queries, configurable timeouts, and tenancy isolation
  • Added migration script that creates limited_user with SELECT-only permissions (references non-existent allowed_table1)
  • Implemented query timing stats retrieval using admin privilege escalation to access system.query_log
  • Added comprehensive E2E tests covering security boundaries (no CREATE, INSERT, or system table access)

Issues Found:

  • Migration grants SELECT on non-existent analytics.allowed_table1, which will cause migration failure
  • Missing fallback handling in getQueryTimingStats when query doesn't appear in query_log (will throw runtime error accessing undefined)

Confidence Score: 2/5

  • This PR has critical issues that will cause migration failures in production
  • Score reflects two critical bugs: the migration references a non-existent table which will fail on execution, and missing error handling in stats retrieval that will cause runtime crashes. The security model and API design are solid, but the implementation has blocking issues.
  • Pay close attention to apps/backend/scripts/clickhouse-migrations.ts (references non-existent table) and apps/backend/src/lib/clickhouse.tsx (missing error handling)

Important Files Changed

File Analysis

| Filename | Score | Overview |
|---|---|---|
| apps/backend/scripts/clickhouse-migrations.ts | 2/5 | Added ClickHouse migration script with user creation and permissions, but references non-existent table |
| apps/backend/src/lib/clickhouse.tsx | 3/5 | Implemented ClickHouse client factory and query timing stats retrieval with admin escalation |
| apps/backend/src/app/api/latest/analytics/query/route.tsx | 4/5 | Added analytics query endpoint with proper auth, timeout handling, and error management |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant AnalyticsAPI
    participant ClickHouse
    participant AdminConnection

    Client->>AnalyticsAPI: POST analytics query
    AnalyticsAPI->>ClickHouse: Execute with limited user

    alt Success
        ClickHouse-->>AnalyticsAPI: Result data
        AnalyticsAPI->>AdminConnection: Get timing stats
        AdminConnection-->>AnalyticsAPI: Stats
        AnalyticsAPI-->>Client: Result with stats
    else Timeout or Error
        ClickHouse-->>AnalyticsAPI: Error
        AnalyticsAPI-->>Client: Error response
    end
```


@greptile-apps greptile-apps bot left a comment


11 files reviewed, 2 comments

Edit Code Review Agent Settings | Greptile

```ts
query_params: { clickhouseExternalPassword },
});
const queries = [
"GRANT SELECT ON analytics.allowed_table1 TO limited_user;",
```

logic: granting SELECT on analytics.allowed_table1 which doesn't exist yet - this will cause migration to fail

Suggested change
"GRANT SELECT ON analytics.allowed_table1 TO limited_user;",
// "GRANT SELECT ON analytics.allowed_table1 TO limited_user;",
Prompt To Fix With AI
This is a comment left during a code review.
Path: apps/backend/scripts/clickhouse-migrations.ts
Line: 14:14

Comment:
**logic:** granting SELECT on `analytics.allowed_table1` which doesn't exist yet - this will cause migration to fail

```suggestion
    // "GRANT SELECT ON analytics.allowed_table1 TO limited_user;",
```

How can I resolve this? If you propose a fix, please make it concise.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (3)
apps/backend/scripts/clickhouse-migrations.ts (1)

9-12: Using plaintext password authentication.

While plaintext_password is acceptable for development environments, consider documenting this choice or using sha256_password for better security practices.

Example with SHA256:

```ts
import { createHash } from 'crypto';

const passwordHash = createHash('sha256')
  .update(clickhouseExternalPassword)
  .digest('hex');

await client.exec({
  query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH sha256_password BY {passwordHash:String}",
  query_params: { passwordHash },
});
```
apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (2)

1-29: Good test coverage for basic query execution.

The tests appropriately use inline snapshots as per coding guidelines. Consider adding a negative test case verifying that client accessType is rejected, since the route requires serverOrHigherAuthTypeSchema.


274-387: Security tests are comprehensive.

Good coverage of dangerous operations (CREATE TABLE, system tables, KILL QUERY, INSERT). Note that error messages expose the internal username limited_user - consider whether this level of detail should be sanitized in the route handler before returning to clients.
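
If that sanitization were added, it could be a small rewrite of the message before it reaches the KnownError; the helper and patterns below are hypothetical, not part of the PR:

```typescript
// Hypothetical helper: strip internal details from ClickHouse error messages
// before they are returned through the analytics query API.
function sanitizeClickhouseErrorMessage(message: string): string {
  return message
    .replace(/limited_user/g, "[analytics user]")  // hide the internal ClickHouse username
    .replace(/\(version [^)]*\)\s*$/, "");         // drop the trailing server version, if any
}
```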

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2cb8806 and 0c08896.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (11)
  • apps/backend/.env.development (1 hunks)
  • apps/backend/package.json (1 hunks)
  • apps/backend/scripts/clickhouse-migrations.ts (1 hunks)
  • apps/backend/scripts/db-migrations.ts (2 hunks)
  • apps/backend/src/app/api/latest/analytics/query/route.tsx (1 hunks)
  • apps/backend/src/lib/clickhouse.tsx (1 hunks)
  • apps/dev-launchpad/public/index.html (2 hunks)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1 hunks)
  • apps/e2e/tests/snapshot-serializer.ts (1 hunks)
  • docker/dependencies/docker.compose.yaml (2 hunks)
  • packages/stack-shared/src/known-errors.tsx (2 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Ensure code passes pnpm typecheck for TypeScript type validation
Never use toast for blocking alerts and errors; use alerts instead as they are more visible to users
Keep hover/click transitions snappy and fast; apply transitions after the action (e.g., smooth fade-out on hover end) rather than delaying actions with pre-transitions to avoid sluggish UI
Never use try-catch-all, never void a promise, never use .catch(console.error) or similar; use runAsynchronously or runAsynchronouslyWithAlert for error handling instead of try-catch blocks
Use loading indicators instead of try-catch blocks for asynchronous UI operations; button components like support async callbacks with built-in loading state
Use ES6 maps instead of records wherever possible

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/snapshot-serializer.ts
  • apps/backend/src/app/api/latest/analytics/query/route.tsx
  • apps/backend/src/lib/clickhouse.tsx
  • packages/stack-shared/src/known-errors.tsx
  • apps/backend/scripts/db-migrations.ts
apps/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Prefer client components and dynamic functions like usePathname over Next.js dynamic functions to keep pages static; avoid using await params or similar dynamic patterns

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/snapshot-serializer.ts
  • apps/backend/src/app/api/latest/analytics/query/route.tsx
  • apps/backend/src/lib/clickhouse.tsx
  • apps/backend/scripts/db-migrations.ts
apps/e2e/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Always add new E2E tests when you change the API or SDK interface

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/snapshot-serializer.ts
**/*.{test,spec}.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{test,spec}.{ts,tsx}: Run all tests using pnpm test run; run specific tests using pnpm test run <file-filters> with Vitest
Use .toMatchInlineSnapshot for test assertions instead of other selectors when possible; check snapshot-serializer.ts for formatting and non-deterministic value handling

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
apps/backend/src/app/api/latest/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

In the Stack Auth monorepo backend API, organize routes by resource type (auth, user management, team management, OAuth providers) in /apps/backend/src/app/api/latest/*

Files:

  • apps/backend/src/app/api/latest/analytics/query/route.tsx
🧠 Learnings (4)
📚 Learning: 2025-11-25T02:09:03.104Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T02:09:03.104Z
Learning: Applies to packages/stack-shared/src/config/** : When making backwards-incompatible changes to the config schema, update the migration functions in `packages/stack-shared/src/config/schema.ts`

Applied to files:

  • apps/backend/scripts/clickhouse-migrations.ts
  • apps/backend/scripts/db-migrations.ts
📚 Learning: 2025-11-25T02:09:03.104Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T02:09:03.104Z
Learning: Applies to apps/e2e/**/*.{ts,tsx} : Always add new E2E tests when you change the API or SDK interface

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
📚 Learning: 2025-11-25T02:09:03.104Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T02:09:03.104Z
Learning: Applies to **/*.{test,spec}.{ts,tsx} : Use `.toMatchInlineSnapshot` for test assertions instead of other selectors when possible; check snapshot-serializer.ts for formatting and non-deterministic value handling

Applied to files:

  • apps/e2e/tests/snapshot-serializer.ts
📚 Learning: 2025-11-25T02:09:03.104Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T02:09:03.104Z
Learning: Applies to apps/backend/src/app/api/latest/**/*.{ts,tsx} : In the Stack Auth monorepo backend API, organize routes by resource type (auth, user management, team management, OAuth providers) in `/apps/backend/src/app/api/latest/*`

Applied to files:

  • apps/backend/src/app/api/latest/analytics/query/route.tsx
🧬 Code graph analysis (3)
apps/backend/scripts/clickhouse-migrations.ts (2)
apps/backend/src/lib/clickhouse.tsx (1)
  • createClickhouseClient (11-19)
packages/stack-shared/src/utils/env.tsx (1)
  • getEnvVariable (16-58)
packages/stack-shared/src/known-errors.tsx (2)
packages/stack-shared/src/index.ts (1)
  • KnownError (14-14)
packages/stack-shared/src/utils/results.tsx (1)
  • error (36-41)
apps/backend/scripts/db-migrations.ts (1)
apps/backend/scripts/clickhouse-migrations.ts (1)
  • runClickhouseMigrations (4-24)
🪛 dotenv-linter (4.0.0)
apps/backend/.env.development

[warning] 75-75: [SubstitutionKey] The CLICKHOUSE_URL key is not assigned properly

(SubstitutionKey)


[warning] 76-76: [UnorderedKey] The CLICKHOUSE_ADMIN_USER key should go before the CLICKHOUSE_URL key

(UnorderedKey)


[warning] 77-77: [UnorderedKey] The CLICKHOUSE_ADMIN_PASSWORD key should go before the CLICKHOUSE_ADMIN_USER key

(UnorderedKey)


[warning] 78-78: [UnorderedKey] The CLICKHOUSE_EXTERNAL_USER key should go before the CLICKHOUSE_URL key

(UnorderedKey)


[warning] 79-79: [UnorderedKey] The CLICKHOUSE_EXTERNAL_PASSWORD key should go before the CLICKHOUSE_EXTERNAL_USER key

(UnorderedKey)


[warning] 80-80: [UnorderedKey] The CLICKHOUSE_DATABASE key should go before the CLICKHOUSE_EXTERNAL_PASSWORD key

(UnorderedKey)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Vercel Agent Review
🔇 Additional comments (12)
apps/e2e/tests/snapshot-serializer.ts (1)

71-72: LGTM!

Adding timing fields to stripFields is appropriate for test stability, as these values are non-deterministic and would cause snapshot tests to fail.
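
For readers without the diff open, redaction in a snapshot serializer usually amounts to listing the field names and replacing their values before comparison; a sketch of the idea (the actual stripFields structure and placeholder format in snapshot-serializer.ts may differ):

```typescript
// Field names whose values are replaced before snapshots are compared, so
// non-deterministic timings don't break .toMatchInlineSnapshot assertions.
const stripFields: string[] = [
  // ...existing redacted fields...
  "cpu_time_ms",
  "wall_clock_time_ms",
];

function redactStrippedFields(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redactStrippedFields);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([key, child]) =>
        stripFields.includes(key) ? [key, "<stripped>"] : [key, redactStrippedFields(child)],
      ),
    );
  }
  return value;
}
```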

apps/backend/.env.development (1)

73-80: LGTM!

The ClickHouse environment variables follow the established patterns in this file. The port interpolation and password placeholder conventions are consistent with other services.

Note: The static analysis warnings about SubstitutionKey and UnorderedKey can be safely ignored—these are false positives given the standard patterns used throughout this .env file.

apps/dev-launchpad/public/index.html (2)

132-132: LGTM!

The ClickHouse background service entry is consistent with other services and correctly uses port suffix 33.


304-312: LGTM!

The ClickHouse app tile follows the established pattern and correctly references port suffix 33, matching the Docker service configuration.

apps/backend/scripts/db-migrations.ts (2)

11-11: LGTM!

The import is correctly structured.


155-156: LGTM!

Running ClickHouse migrations after Prisma migrations complete is the correct sequencing. The async/await usage is proper.

apps/backend/src/lib/clickhouse.tsx (1)

11-19: LGTM!

The client factory function correctly handles both admin and external authentication modes with optional timeout configuration.
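
For readers without the diff, a factory along these lines would match that description; the env var names come from this PR, but the body is a sketch, not the actual apps/backend/src/lib/clickhouse.tsx:

```typescript
import { createClient, type ClickHouseClient } from "@clickhouse/client";

type ClickhouseAuthType = "admin" | "external";

// Sketch of the dual-credential factory described above: "admin" uses the superuser
// for migrations and system.query_log access, "external" uses the restricted limited_user.
export function createClickhouseClient(authType: ClickhouseAuthType, database?: string): ClickHouseClient {
  const isAdmin = authType === "admin";
  return createClient({
    url: process.env.STACK_CLICKHOUSE_URL,
    username: isAdmin
      ? process.env.STACK_CLICKHOUSE_ADMIN_USER
      : process.env.STACK_CLICKHOUSE_EXTERNAL_USER,
    password: isAdmin
      ? process.env.STACK_CLICKHOUSE_ADMIN_PASSWORD
      : process.env.STACK_CLICKHOUSE_EXTERNAL_PASSWORD,
    database: database ?? process.env.STACK_CLICKHOUSE_DATABASE,
    request_timeout: 30_000, // the optional timeout mentioned in the comment above
  });
}

export const clickhouseAdminClient = createClickhouseClient("admin");
export const clickhouseExternalClient = createClickhouseClient("external");
```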

docker/dependencies/docker.compose.yaml (1)

225-241: ClickHouse 25.10 is a stable release (verified Oct 31, 2025), and a patch update (25.10.1) is also available.

The image version specified in the docker-compose configuration is current, well-supported, and appropriate for use. No concerns with the version choice or service configuration.

apps/backend/package.json (1)

46-46: @clickhouse/client version 1.14.0 is current and has no known security vulnerabilities.

Version 1.14.0 is the latest stable release (released November 20, 2025), and the ^1.14.0 specifier is appropriate. No known security vulnerabilities were identified for the @clickhouse/client npm package.

apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1)

217-244: LGTM!

The timeout test using SELECT sleep(3) with a 100ms timeout is a reliable way to verify timeout handling.

packages/stack-shared/src/known-errors.tsx (2)

1632-1652: LGTM!

The new AnalyticsQueryTimeout and AnalyticsQueryError constructors follow the established pattern for known errors. The HTTP 400 status is appropriate, and the constructorArgsFromJson implementations correctly deserialize the error details.


1784-1785: LGTM!

The new error types are properly exported in the KnownErrors object.
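
To show how these constructors would typically be consumed by the query route, a hedged sketch of the error mapping (the import path, detection heuristic, and constructor arguments are assumptions; only the error names come from this PR):

```typescript
// Illustrative only: the real route's mapping and the constructors' exact
// signatures may differ from what is shown here.
import { KnownErrors } from "@stackframe/stack-shared";

function mapClickhouseError(error: unknown): never {
  const message = error instanceof Error ? error.message : String(error);
  if (/TIMEOUT_EXCEEDED|Timeout exceeded/i.test(message)) {
    // ClickHouse reports an exceeded max_execution_time as a TIMEOUT_EXCEEDED error.
    throw new KnownErrors.AnalyticsQueryTimeout(message);
  }
  throw new KnownErrors.AnalyticsQueryError(message);
}
```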


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
apps/backend/scripts/clickhouse-migrations.ts (1)

4-23: Migration GRANT targets a likely non-existent table and hardcodes the external user

This script still has the two issues called out in earlier reviews:

  • GRANT SELECT ON analytics.allowed_table1 TO limited_user; will fail if analytics.allowed_table1 doesn’t exist yet (which seems likely given the TODO about creating migration files).
  • The external user name is hardcoded as limited_user, even though STACK_CLICKHOUSE_EXTERNAL_USER exists and is used elsewhere; that can drift.

You could make this more robust along these lines:

 export async function runClickhouseMigrations() {
   console.log("Running Clickhouse migrations...");
   const client = createClickhouseClient("admin");
-  const clickhouseExternalPassword = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_PASSWORD");
-  // todo: create migration files
-  await client.exec({
-    query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH plaintext_password BY {clickhouseExternalPassword:String}",
-    query_params: { clickhouseExternalPassword },
-  });
-  const queries = [
-    "GRANT SELECT ON analytics.allowed_table1 TO limited_user;",
-    "REVOKE ALL ON system.* FROM limited_user;",
-    "REVOKE CREATE, ALTER, DROP, INSERT ON *.* FROM limited_user;"
-  ];
-  for (const query of queries) {
-    console.log(query);
-    await client.exec({ query });
-  }
-  console.log("Clickhouse migrations complete");
-  await client.close();
+  const externalUser = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_USER", "limited_user");
+  const clickhouseExternalPassword = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_PASSWORD");
+
+  try {
+    // TODO: create proper ClickHouse schema migrations
+    await client.exec({
+      query: "CREATE USER IF NOT EXISTS {externalUser:Identifier} IDENTIFIED WITH plaintext_password BY {clickhouseExternalPassword:String}",
+      query_params: { externalUser, clickhouseExternalPassword },
+    });
+
+    const queries = [
+      // Enable GRANTs once the target tables exist.
+      // "GRANT SELECT ON analytics.some_table TO {externalUser:Identifier};",
+      `REVOKE ALL ON system.* FROM ${externalUser};`,
+      `REVOKE CREATE, ALTER, DROP, INSERT ON *.* FROM ${externalUser};`,
+    ];
+
+    for (const query of queries) {
+      console.log(query);
+      await client.exec({ query });
+    }
+
+    console.log("Clickhouse migrations complete");
+  } finally {
+    await client.close();
+  }
 }

Adjust the GRANT line once your analytics tables are in place.

apps/backend/src/lib/clickhouse.tsx (1)

22-55: Fix getQueryTimingStats typing and guard against missing query_log rows

This function still has the issues called out in earlier reviews:

  • profile.json is typed as if it returns a flat { cpu_time_ms, wall_clock_time_ms }, but you then access stats.data[0]; this doesn’t match the ClickHouse JSON shape and will likely fail type-checking.
  • At runtime, system.query_log can legitimately return an empty data array (log entry not flushed yet), making stats.data[0] undefined and crashing callers that assume stats are always present.

A safer version would be:

-export const getQueryTimingStats = async (client: ClickHouseClient, queryId: string) => {
+export const getQueryTimingStats = async (
+  client: ClickHouseClient,
+  queryId: string,
+): Promise<{ cpu_time_ms: number; wall_clock_time_ms: number }> => {
@@
-  const stats = await profile.json<{
-    cpu_time_ms: number,
-    wall_clock_time_ms: number,
-  }>();
-  return stats.data[0];
+  const stats = await profile.json<{
+    data: Array<{
+      cpu_time_ms: number;
+      wall_clock_time_ms: number;
+    }>;
+  }>();
+
+  if (!stats.data || stats.data.length === 0) {
+    throw new Error(
+      `Query timing stats not found for query_id=${queryId}; query_log entry may not be available yet.`,
+    );
+  }
+
+  return stats.data[0];
 };

Callers can then convert this error into a KnownErrors.AnalyticsQueryError or similar, as appropriate for the API surface.

🧹 Nitpick comments (2)
.github/workflows/docker-server-test.yaml (1)

26-30: Consider adding a lightweight readiness check for ClickHouse

The container may not always be ready within 5 seconds, especially on a busy CI runner. A transient slow start would make this job flaky.

Consider replacing the fixed sleep with a small readiness loop, for example:

```yaml
      - name: Setup clickhouse
        run: |
          docker run -d --name clickhouse \
            -e CLICKHOUSE_DB=analytics \
            -e CLICKHOUSE_USER=stackframe \
            -e CLICKHOUSE_PASSWORD=password \
            -p 8133:8123 clickhouse/clickhouse-server:25.10

          for i in {1..30}; do
            if docker exec clickhouse clickhouse-client --query "SELECT 1" >/dev/null 2>&1; then
              echo "ClickHouse is ready"
              break
            fi
            echo "Waiting for ClickHouse..."
            sleep 1
          done

          docker logs clickhouse
```

apps/backend/.env.development (1)

74-80: ClickHouse env block looks consistent with backend usage

The new STACK_CLICKHOUSE_* variables line up with createClickhouseClient and the CI ClickHouse setup (DB analytics, admin user stackframe, external user limited_user), so this should wire up cleanly for local dev.

The dotenv-linter warnings here (substitution and key ordering) are stylistic only; the same ${NEXT_PUBLIC_STACK_PORT_PREFIX:-81} pattern is already used above. You can reorder the keys if you want to silence the linter, but it's not functionally required.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0c088962934cdf829c9e6b282d2fa5a4e07723a3 and 41287db62997e2a54a1084037a90136517fcbbe5.

📒 Files selected for processing (5)
  • .github/workflows/docker-server-test.yaml (1 hunks)
  • apps/backend/.env.development (1 hunks)
  • apps/backend/scripts/clickhouse-migrations.ts (1 hunks)
  • apps/backend/src/lib/clickhouse.tsx (1 hunks)
  • docker/server/.env.example (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Ensure code passes pnpm typecheck for TypeScript type validation
Never use toast for blocking alerts and errors; use alerts instead as they are more visible to users
Keep hover/click transitions snappy and fast; apply transitions after the action (e.g., smooth fade-out on hover end) rather than delaying actions with pre-transitions to avoid sluggish UI
Never use try-catch-all, never void a promise, never use .catch(console.error) or similar; use runAsynchronously or runAsynchronouslyWithAlert for error handling instead of try-catch blocks
Use loading indicators instead of try-catch blocks for asynchronous UI operations; button components like <Button> support async callbacks with built-in loading state
Use ES6 maps instead of records wherever possible

Files:

  • apps/backend/src/lib/clickhouse.tsx
  • apps/backend/scripts/clickhouse-migrations.ts
apps/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Prefer client components and dynamic functions like usePathname over Next.js dynamic functions to keep pages static; avoid using await params or similar dynamic patterns

Files:

  • apps/backend/src/lib/clickhouse.tsx
  • apps/backend/scripts/clickhouse-migrations.ts
🧠 Learnings (1)
📚 Learning: 2025-11-25T02:09:03.104Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T02:09:03.104Z
Learning: Applies to packages/stack-shared/src/config/** : When making backwards-incompatible changes to the config schema, update the migration functions in packages/stack-shared/src/config/schema.ts

Applied to files:

  • apps/backend/scripts/clickhouse-migrations.ts
🧬 Code graph analysis (2)
apps/backend/src/lib/clickhouse.tsx (1)
packages/stack-shared/src/utils/env.tsx (1)
  • getEnvVariable (16-58)
apps/backend/scripts/clickhouse-migrations.ts (2)
apps/backend/src/lib/clickhouse.tsx (1)
  • createClickhouseClient (11-19)
packages/stack-shared/src/utils/env.tsx (1)
  • getEnvVariable (16-58)
🪛 dotenv-linter (4.0.0)
docker/server/.env.example

[warning] 20-20: [UnorderedKey] The STACK_CLICKHOUSE_ADMIN_PASSWORD key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)


[warning] 21-21: [UnorderedKey] The STACK_CLICKHOUSE_EXTERNAL_PASSWORD key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)

apps/backend/.env.development

[warning] 75-75: [SubstitutionKey] The STACK_CLICKHOUSE_URL key is not assigned properly

(SubstitutionKey)


[warning] 76-76: [UnorderedKey] The STACK_CLICKHOUSE_ADMIN_USER key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)


[warning] 77-77: [UnorderedKey] The STACK_CLICKHOUSE_ADMIN_PASSWORD key should go before the STACK_CLICKHOUSE_ADMIN_USER key

(UnorderedKey)


[warning] 78-78: [UnorderedKey] The STACK_CLICKHOUSE_EXTERNAL_USER key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)


[warning] 79-79: [UnorderedKey] The STACK_CLICKHOUSE_EXTERNAL_PASSWORD key should go before the STACK_CLICKHOUSE_EXTERNAL_USER key

(UnorderedKey)


[warning] 80-80: [UnorderedKey] The STACK_CLICKHOUSE_DATABASE key should go before the STACK_CLICKHOUSE_EXTERNAL_PASSWORD key

(UnorderedKey)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: docker
  • GitHub Check: restart-dev-and-test-with-custom-base-port
  • GitHub Check: check_prisma_migrations (22.x)
  • GitHub Check: setup-tests
  • GitHub Check: build (22.x)
  • GitHub Check: lint_and_build (latest)
  • GitHub Check: all-good
  • GitHub Check: build (22.x)
  • GitHub Check: restart-dev-and-test
  • GitHub Check: build (22.x)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
docker/server/.env.example (1)

19-21: Reorder ClickHouse keys for consistency with dotenv-linter and env-var conventions.

The three ClickHouse environment variables should be in alphabetical order to match the Postgres connection strings pattern and resolve the dotenv-linter UnorderedKey warnings.

Apply this diff:

-STACK_CLICKHOUSE_URL=http://host.docker.internal:8133
 STACK_CLICKHOUSE_ADMIN_PASSWORD=password
 STACK_CLICKHOUSE_EXTERNAL_PASSWORD=password
+STACK_CLICKHOUSE_URL=http://host.docker.internal:8133
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 41287db and 1320221.

📒 Files selected for processing (1)
  • docker/server/.env.example (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-25T02:09:03.104Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T02:09:03.104Z
Learning: When making dashboard changes, provide deep links to the dashboard page in the format `http://localhost:<$NEXT_PUBLIC_STACK_PORT_PREFIX>01/projects/-selector-/...` (or use a.localhost, b.localhost, c.localhost for port prefixes 91, 92, 93)

Applied to files:

  • docker/server/.env.example
🪛 dotenv-linter (4.0.0)
docker/server/.env.example

[warning] 20-20: [UnorderedKey] The STACK_CLICKHOUSE_ADMIN_PASSWORD key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)


[warning] 21-21: [UnorderedKey] The STACK_CLICKHOUSE_EXTERNAL_PASSWORD key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: lint_and_build (latest)
  • GitHub Check: restart-dev-and-test-with-custom-base-port
  • GitHub Check: check_prisma_migrations (22.x)
  • GitHub Check: build (22.x)
  • GitHub Check: build (22.x)
  • GitHub Check: setup-tests
  • GitHub Check: build (22.x)
  • GitHub Check: all-good
  • GitHub Check: restart-dev-and-test
  • GitHub Check: docker
🔇 Additional comments (1)
docker/server/.env.example (1)

19-21: ✓ Critical issue resolved: ClickHouse URL correctly targets host-exposed service.

The change from localhost:8133 to host.docker.internal:8133 ensures the server container correctly reaches the ClickHouse service. This aligns with the Postgres connection pattern and will allow ClickHouse operations to function properly in Docker environments.

@N2D4 N2D4 self-requested a review November 26, 2025 18:44

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
apps/backend/scripts/clickhouse-migrations.ts (2)

14-14: GRANT statement references non-existent table.

This grants SELECT on analytics.allowed_table1, which doesn't exist yet and will cause the migration to fail.

Comment or remove this line until the table is created in a prior migration step.


22-23: Move client cleanup to a finally block.

If any query fails before line 23, the client won't be closed, leaving the connection open.

Apply this diff to guarantee cleanup:

-  console.log("Clickhouse migrations complete");
-  await client.close();
+  try {
+    // existing migration logic here
+    console.log("Clickhouse migrations complete");
+  } finally {
+    await client.close();
+  }

Note: Per coding guidelines, avoid try-catch-all for error handling. Let errors propagate naturally while ensuring cleanup in the finally block.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0077d16 and 62c6646.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (2)
  • apps/backend/package.json (1 hunks)
  • apps/backend/scripts/clickhouse-migrations.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: For blocking alerts and errors, never use toast, as they are easily missed by the user. Instead, use alerts
Keep hover/click transitions snappy and fast. Don't delay the action with a pre-transition; apply transitions after the action instead
NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error). Use runAsynchronously or runAsynchronouslyWithAlert instead for asynchronous error handling
When creating hover transitions, avoid hover-enter transitions and just use hover-exit transitions (e.g., transition-colors hover:transition-none)
Use ES6 maps instead of records wherever possible

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
🧠 Learnings (1)
📚 Learning: 2025-11-28T21:21:39.123Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-28T21:21:39.123Z
Learning: Applies to packages/stack-shared/src/config/schema.ts : Whenever making backwards-incompatible changes to the config schema, update the migration functions in `packages/stack-shared/src/config/schema.ts`

Applied to files:

  • apps/backend/scripts/clickhouse-migrations.ts
🧬 Code graph analysis (1)
apps/backend/scripts/clickhouse-migrations.ts (2)
apps/backend/src/lib/clickhouse.tsx (1)
  • createClickhouseClient (11-19)
packages/stack-shared/src/utils/env.tsx (1)
  • getEnvVariable (16-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: docker
  • GitHub Check: check_prisma_migrations (22.x)
  • GitHub Check: build (22.x)
  • GitHub Check: build (22.x)
  • GitHub Check: setup-tests
  • GitHub Check: restart-dev-and-test-with-custom-base-port
  • GitHub Check: lint_and_build (latest)
  • GitHub Check: restart-dev-and-test
  • GitHub Check: all-good
  • GitHub Check: build (22.x)
🔇 Additional comments (1)
apps/backend/package.json (1)

46-46: Version 1.14.0 is current and has no known vulnerabilities.

The @clickhouse/client package version 1.14.0 is the latest available on npm with no direct security vulnerabilities reported in Snyk or other vulnerability databases.


cmux-agent bot commented Dec 8, 2025

Older cmux preview screenshots (latest comment is below)

Preview Screenshots


Captured 3 screenshots for commit 20b597f (2025-12-08 18:39:32.957 UTC).

  • clickhouse-tile.png: Element screenshot of the new ClickHouse service tile at port 8133
  • dev-launchpad-full.png: Full page screenshot of the Dev Launchpad showing the new ClickHouse and MCPJam Inspector service tiles added in this PR, along with ClickHouse in the Background services list
  • mcpjam-inspector-tile.png: Element screenshot of the new MCPJam Inspector service tile at port 8126


Generated by cmux preview system


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
.github/workflows/docker-server-build-run.yaml (1)

26-30: ClickHouse setup step looks good and maintains consistency.

The step is well-configured:

  • Uses the same ClickHouse version (25.10) as the docker-compose file for consistency
  • Port mapping (8133:8123) aligns with the development environment configuration
  • Environment variables match those in .env.development
  • Logging output helps with debugging startup issues

Consider whether 5 seconds is sufficient for ClickHouse to fully initialize. Depending on the CI environment performance, you might want to add a health check loop similar to the "Check server health" step, or increase the sleep duration to ensure ClickHouse is ready before subsequent steps.

apps/backend/.env.development (1)

75-82: Consider alphabetically reordering the ClickHouse configuration keys for improved maintainability.

While the variable substitution pattern is already supported via the backend's custom environment variable handling, the ClickHouse keys should follow alphabetical order: STACK_CLICKHOUSE_ADMIN_PASSWORD, STACK_CLICKHOUSE_ADMIN_USER, STACK_CLICKHOUSE_DATABASE, STACK_CLICKHOUSE_EXTERNAL_PASSWORD, STACK_CLICKHOUSE_EXTERNAL_USER, STACK_CLICKHOUSE_URL.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 20b597f and 73407b3.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (6)
  • .github/workflows/docker-server-build-run.yaml (1 hunks)
  • apps/backend/.env.development (1 hunks)
  • apps/backend/package.json (1 hunks)
  • apps/dev-launchpad/public/index.html (2 hunks)
  • docker/dependencies/docker.compose.yaml (2 hunks)
  • packages/stack-shared/src/known-errors.tsx (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/backend/package.json
  • apps/dev-launchpad/public/index.html
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Always add new E2E tests when changing the API or SDK interface
For blocking alerts and errors, never use toast; use alerts instead as they are less easily missed by the user
NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling
Use ES6 maps instead of records wherever you can

Files:

  • packages/stack-shared/src/known-errors.tsx
**/*.{ts,tsx,css}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,css}: Keep hover/click transitions snappy and fast; avoid fade-in delays on hover. Apply transitions after action completion instead, like smooth fade-out when hover ends
Use hover-exit transitions instead of hover-enter transitions; for example, use 'transition-colors hover:transition-none' instead of fade-in on hover

Files:

  • packages/stack-shared/src/known-errors.tsx
{.env*,**/*.{ts,tsx,js}}

📄 CodeRabbit inference engine (AGENTS.md)

Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Files:

  • packages/stack-shared/src/known-errors.tsx
🧠 Learnings (1)
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to {.env*,**/*.{ts,tsx,js}} : Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Applied to files:

  • apps/backend/.env.development
🪛 dotenv-linter (4.0.0)
apps/backend/.env.development

[warning] 77-77: [SubstitutionKey] The STACK_CLICKHOUSE_URL key is not assigned properly

(SubstitutionKey)


[warning] 78-78: [UnorderedKey] The STACK_CLICKHOUSE_ADMIN_USER key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)


[warning] 79-79: [UnorderedKey] The STACK_CLICKHOUSE_ADMIN_PASSWORD key should go before the STACK_CLICKHOUSE_ADMIN_USER key

(UnorderedKey)


[warning] 80-80: [UnorderedKey] The STACK_CLICKHOUSE_EXTERNAL_USER key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)


[warning] 81-81: [UnorderedKey] The STACK_CLICKHOUSE_EXTERNAL_PASSWORD key should go before the STACK_CLICKHOUSE_EXTERNAL_USER key

(UnorderedKey)


[warning] 82-82: [UnorderedKey] The STACK_CLICKHOUSE_DATABASE key should go before the STACK_CLICKHOUSE_EXTERNAL_PASSWORD key

(UnorderedKey)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: build (22.x)
  • GitHub Check: setup-tests-with-custom-base-port
  • GitHub Check: setup-tests
  • GitHub Check: restart-dev-and-test
  • GitHub Check: build (22.x)
  • GitHub Check: docker
  • GitHub Check: all-good
  • GitHub Check: lint_and_build (latest)
  • GitHub Check: restart-dev-and-test-with-custom-base-port
  • GitHub Check: E2E Tests (Node 22.x, Freestyle prod)
  • GitHub Check: E2E Tests (Node 22.x, Freestyle mock)
  • GitHub Check: check_prisma_migrations (22.x)
🔇 Additional comments (2)
docker/dependencies/docker.compose.yaml (1)

208-226: ClickHouse service configuration looks good.

The ClickHouse service is well-configured with:

  • Specific image version (25.10) for reproducibility
  • Proper environment variables for database initialization
  • Persistent volume mount
  • Increased ulimits for production-like behavior
packages/stack-shared/src/known-errors.tsx (1)

1636-1656: LGTM! Analytics error constructors follow established patterns.

The new AnalyticsQueryTimeout and AnalyticsQueryError error constructors are well-implemented:

  • Follow the existing createKnownErrorConstructor pattern consistently
  • Use appropriate 400 status codes for client errors
  • Include descriptive messages and properly structured details objects
  • Correctly implement constructorArgsFromJson for deserialization
  • Properly exported in the KnownErrors collection

Also applies to: 1788-1789


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
apps/backend/scripts/clickhouse-migrations.ts (2)

14-14: Migration will fail: analytics.allowed_table1 doesn't exist.

This GRANT references a table that hasn't been created. The migration will fail at runtime. Per past discussion, this is noted as temporary, but it should be commented out or removed until the table exists.


9-21: Add try-finally to ensure client cleanup on failure.

If any client.exec() call throws, client.close() on line 23 won't be reached, causing a resource leak. Wrap the migration logic in a try-finally block.

 export async function runClickhouseMigrations() {
   console.log("[Clickhouse] Running Clickhouse migrations...");
-  const client = clickhouseAdminClient;
+  const client = createClickhouseClient("admin");
   const clickhouseExternalPassword = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_PASSWORD");
-  // todo: create migration files
-  await client.exec({
-    query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH plaintext_password BY {clickhouseExternalPassword:String}",
-    query_params: { clickhouseExternalPassword },
-  });
-  const queries = [
-    "GRANT SELECT ON analytics.allowed_table1 TO limited_user;",
-    "REVOKE ALL ON system.* FROM limited_user;",
-    "REVOKE CREATE, ALTER, DROP, INSERT ON *.* FROM limited_user;"
-  ];
-  for (const query of queries) {
-    console.log("[Clickhouse] " + query);
-    await client.exec({ query });
+  try {
+    await client.exec({
+      query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH plaintext_password BY {clickhouseExternalPassword:String}",
+      query_params: { clickhouseExternalPassword },
+    });
+    const queries = [
+      "GRANT SELECT ON analytics.allowed_table1 TO limited_user;",
+      "REVOKE ALL ON system.* FROM limited_user;",
+      "REVOKE CREATE, ALTER, DROP, INSERT ON *.* FROM limited_user;"
+    ];
+    for (const query of queries) {
+      console.log("[Clickhouse] " + query);
+      await client.exec({ query });
+    }
+    console.log("[Clickhouse] Clickhouse migrations complete");
+  } finally {
+    await client.close();
   }
-  console.log("[Clickhouse] Clickhouse migrations complete");
-  await client.close();
 }
🧹 Nitpick comments (2)
apps/backend/src/app/api/latest/internal/analytics/query/route.ts (1)

52-53: Consider wrapping getQueryTimingStats errors.

If getQueryTimingStats fails (e.g., query log entry not found), it throws a StackAssertionError which will propagate as an internal error rather than a user-friendly AnalyticsQueryError. Consider wrapping this call for consistent error responses.

     const rows = await resultSet.data.json<Record<string, unknown>[]>();
-    const stats = await getQueryTimingStats(client, queryId);
+    let stats;
+    try {
+      stats = await getQueryTimingStats(client, queryId);
+    } catch (error) {
+      const message = error instanceof Error ? error.message : "Failed to retrieve query timing statistics";
+      throw new KnownErrors.AnalyticsQueryError(message);
+    }
apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1)

217-239: Test timeout behavior may be unstable across ClickHouse versions

The SELECT sleep(3) with 100ms timeout can be affected by ClickHouse versions where sleep queries behave inconsistently with max_execution_time enforcement. Recent ClickHouse versions show timeout behavior inconsistencies where max_execution_time is sometimes treated differently.

Consider adding an inline comment explaining that the test expects ANALYTICS_QUERY_ERROR due to server-side timeout enforcement, and note any specific ClickHouse version constraints if the timeout behavior is known to be reliable only in certain versions.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 73407b3 and f6fb8cf.

📒 Files selected for processing (6)
  • apps/backend/scripts/clickhouse-migrations.ts (1 hunks)
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts (1 hunks)
  • apps/backend/src/lib/clickhouse.tsx (1 hunks)
  • apps/dev-launchpad/public/index.html (2 hunks)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1 hunks)
  • docker/dependencies/docker.compose.yaml (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/dev-launchpad/public/index.html
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Always add new E2E tests when changing the API or SDK interface
For blocking alerts and errors, never use toast; use alerts instead as they are less easily missed by the user
NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling
Use ES6 maps instead of records wherever you can

Files:

  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/backend/scripts/clickhouse-migrations.ts
  • apps/backend/src/lib/clickhouse.tsx
**/*.{ts,tsx,css}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,css}: Keep hover/click transitions snappy and fast; avoid fade-in delays on hover. Apply transitions after action completion instead, like smooth fade-out when hover ends
Use hover-exit transitions instead of hover-enter transitions; for example, use 'transition-colors hover:transition-none' instead of fade-in on hover

Files:

  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/backend/scripts/clickhouse-migrations.ts
  • apps/backend/src/lib/clickhouse.tsx
apps/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

NEVER use Next.js dynamic functions if you can avoid them; prefer using client components and hooks like usePathname instead of await params to keep pages static

Files:

  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/backend/scripts/clickhouse-migrations.ts
  • apps/backend/src/lib/clickhouse.tsx
{.env*,**/*.{ts,tsx,js}}

📄 CodeRabbit inference engine (AGENTS.md)

Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Files:

  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/backend/scripts/clickhouse-migrations.ts
  • apps/backend/src/lib/clickhouse.tsx
**/*.test.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Prefer .toMatchInlineSnapshot over other selectors in tests when possible; check snapshot-serializer.ts for formatting details

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
apps/e2e/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
apps/**/*.tsx

📄 CodeRabbit inference engine (AGENTS.md)

Check existing apps for inspiration when implementing new apps or pages; update apps-frontend.tsx and apps-config.ts to add new apps

Files:

  • apps/backend/src/lib/clickhouse.tsx
🧠 Learnings (5)
📚 Learning: 2025-12-17T01:23:07.460Z
Learnt from: N2D4
Repo: stack-auth/stack-auth PR: 1032
File: apps/backend/src/app/api/latest/analytics/query/route.tsx:20-20
Timestamp: 2025-12-17T01:23:07.460Z
Learning: In apps/backend/src/app/api/latest/analytics/query/route.tsx, the include_all_branches field controls which ClickHouse user type is used: when true, use "admin" authType for access to all branches; when false (default), use "external" authType for limited/filtered branch access.

Applied to files:

  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/backend/scripts/clickhouse-migrations.ts
  • apps/backend/src/lib/clickhouse.tsx
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.{ts,tsx} : NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling

Applied to files:

  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to apps/e2e/**/*.{ts,tsx} : Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.{ts,tsx} : Always add new E2E tests when changing the API or SDK interface

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to packages/stack-shared/src/config/schema.ts : Whenever making backwards-incompatible changes to the config schema, update the migration functions in packages/stack-shared/src/config/schema.ts

Applied to files:

  • apps/backend/scripts/clickhouse-migrations.ts
🧬 Code graph analysis (2)
apps/backend/src/app/api/latest/internal/analytics/query/route.ts (4)
apps/backend/src/route-handlers/smart-route-handler.tsx (1)
  • createSmartRouteHandler (211-296)
packages/stack-shared/src/schema-fields.ts (8)
  • yupObject (247-251)
  • adminAuthTypeSchema (499-499)
  • adaptSchema (330-330)
  • yupBoolean (195-198)
  • yupString (187-190)
  • yupRecord (283-322)
  • yupMixed (210-212)
  • jsonSchema (410-410)
apps/backend/src/prisma-client.tsx (1)
  • client (133-138)
apps/backend/src/lib/clickhouse.tsx (2)
  • clickhouseExternalClient (22-22)
  • getQueryTimingStats (24-60)
apps/backend/src/lib/clickhouse.tsx (2)
packages/stack-shared/src/utils/env.tsx (1)
  • getEnvVariable (16-58)
packages/stack-shared/src/utils/errors.tsx (1)
  • StackAssertionError (69-85)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: check_prisma_migrations (22.x)
  • GitHub Check: lint_and_build (latest)
  • GitHub Check: all-good
  • GitHub Check: Vercel Agent Review
  • GitHub Check: E2E Tests (Node 22.x, Freestyle prod)
  • GitHub Check: E2E Tests (Node 22.x, Freestyle mock)
  • GitHub Check: build (22.x)
  • GitHub Check: setup-tests-with-custom-base-port
  • GitHub Check: restart-dev-and-test-with-custom-base-port
  • GitHub Check: build (22.x)
  • GitHub Check: setup-tests
  • GitHub Check: restart-dev-and-test
🔇 Additional comments (6)
docker/dependencies/docker.compose.yaml (1)

208-226: ClickHouse service configuration looks good.

The service is properly configured with appropriate ports, environment variables, volume mount, and ulimits for file descriptors. The port conflict with drizzle-gateway has been resolved by moving drizzle-gateway to port suffix 35.

apps/backend/src/app/api/latest/internal/analytics/query/route.ts (1)

8-67: Good use of singleton client pattern.

Using the pre-configured clickhouseExternalClient singleton avoids the per-request client creation and resource leak issues flagged in past reviews. The route structure follows the established patterns with proper schema validation.

apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1)

4-382: Comprehensive E2E test coverage for the analytics query endpoint.

The tests cover:

  • Basic query execution with admin access
  • Parameterized queries (single and multiple params)
  • Custom timeout handling
  • Required field validation
  • Invalid SQL handling
  • Multiple row results
  • Tenancy ID verification
  • Security restrictions (CREATE, system tables, KILL, INSERT)

This aligns well with the coding guidelines requiring E2E tests for API changes.

apps/backend/src/lib/clickhouse.tsx (3)

5-10: Environment variables correctly use STACK_ prefix.

The environment variables follow the coding guidelines with the STACK_CLICKHOUSE_* prefix, ensuring they're picked up by Turborepo and maintaining naming consistency.


52-59: Proper validation for query log results.

The code now validates that exactly one result is returned from the query log and throws a StackAssertionError with context if not, addressing the past review concern about undefined access.


21-22: Singleton clients created at module load time.

The singleton clients are instantiated when the module is imported. If required environment variables (like STACK_CLICKHOUSE_URL) are missing, this will throw at startup rather than at first use. This matches the pattern used elsewhere (e.g., Prisma client) and provides early failure detection.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
apps/backend/scripts/clickhouse-migrations.ts (2)

6-6: Critical: Still closing the singleton client - this will break subsequent operations.

This issue was previously flagged but remains unfixed. Using clickhouseAdminClient (the module-level singleton) and then calling close() will terminate the shared client for the entire process, breaking any other code that relies on it.

🔎 Apply this diff to create a dedicated client instance:
+import { createClickhouseClient } from "@/lib/clickhouse";
 import { getEnvVariable } from "@stackframe/stack-shared/dist/utils/env";
 
 export async function runClickhouseMigrations() {
   console.log("[Clickhouse] Running Clickhouse migrations...");
-  const client = clickhouseAdminClient;
+  const client = createClickhouseClient("admin");

Also applies to: 26-26


4-27: Critical: Connection leak on migration failure - still not addressed.

This was previously flagged: if any client.exec() call throws (lines 8, 13, 14, or 23), the function exits without reaching client.close() on line 26, leaving the connection open and leaking resources.

🔎 Apply this diff to wrap in try-finally:
 export async function runClickhouseMigrations() {
   console.log("[Clickhouse] Running Clickhouse migrations...");
   const client = clickhouseAdminClient;
-  const clickhouseExternalPassword = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_PASSWORD");
-  await client.exec({
-    query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH plaintext_password BY {clickhouseExternalPassword:String}",
-    query_params: { clickhouseExternalPassword },
-  });
-  // todo: create migration files
-  await client.exec({ query: EVENTS_TABLE_BASE_SQL });
-  await client.exec({
-    query: "CREATE ROW POLICY IF NOT EXISTS events_project_isolation ON events FOR SELECT USING project_id = getSetting('SQL_project_id') AND branch_id = getSetting('SQL_branch_id') TO limited_user",
-  });
-  const queries = [
-    "REVOKE ALL PRIVILEGES ON *.* FROM limited_user;",
-    "REVOKE ALL FROM limited_user;",
-    "GRANT SELECT ON analytics.events TO limited_user;",
-  ];
-  for (const query of queries) {
-    await client.exec({ query });
+  try {
+    const clickhouseExternalPassword = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_PASSWORD");
+    await client.exec({
+      query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH plaintext_password BY {clickhouseExternalPassword:String}",
+      query_params: { clickhouseExternalPassword },
+    });
+    // todo: create migration files
+    await client.exec({ query: EVENTS_TABLE_BASE_SQL });
+    await client.exec({
+      query: "CREATE ROW POLICY IF NOT EXISTS events_project_isolation ON events FOR SELECT USING project_id = getSetting('SQL_project_id') AND branch_id = getSetting('SQL_branch_id') TO limited_user",
+    });
+    const queries = [
+      "REVOKE ALL PRIVILEGES ON *.* FROM limited_user;",
+      "REVOKE ALL FROM limited_user;",
+      "GRANT SELECT ON analytics.events TO limited_user;",
+    ];
+    for (const query of queries) {
+      await client.exec({ query });
+    }
+    console.log("[Clickhouse] Clickhouse migrations complete");
+  } catch (error) {
+    console.error("[Clickhouse] Migration failed:", error);
+    throw error;
+  } finally {
+    await client.close();
   }
-  console.log("[Clickhouse] Clickhouse migrations complete");
-  await client.close();
 }
🧹 Nitpick comments (1)
apps/backend/scripts/clickhouse-migrations.ts (1)

12-12: Consider implementing a proper migration file system.

The TODO comment indicates awareness that migrations should be in separate files rather than inline SQL. A file-based migration system would provide:

  • Version tracking and ordering
  • Rollback capability
  • Easier testing and review
  • Separation of concerns

Would you like me to propose a migration file structure or open an issue to track this improvement?
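For reference, one possible shape such a runner could take — purely a sketch, assuming migrations live as ordered .sql files (one statement per file) and are tracked in a small ClickHouse table; none of these file or table names exist in the PR:

import * as fs from "fs";
import * as path from "path";
import { createClickhouseClient } from "@/lib/clickhouse";

// Hypothetical file-based runner: applies ./clickhouse-migrations/*.sql in
// lexicographic order and records each applied file in analytics._migrations.
export async function runFileBasedClickhouseMigrations() {
  const client = createClickhouseClient("admin");
  try {
    await client.exec({ query: "CREATE DATABASE IF NOT EXISTS analytics" });
    await client.exec({
      query: "CREATE TABLE IF NOT EXISTS analytics._migrations (name String, applied_at DateTime DEFAULT now()) ENGINE = MergeTree ORDER BY name",
    });
    const dir = path.join(__dirname, "clickhouse-migrations");
    const files = fs.readdirSync(dir).filter((f) => f.endsWith(".sql")).sort();
    const appliedResult = await client.query({ query: "SELECT name FROM analytics._migrations", format: "JSONEachRow" });
    const applied = new Set(((await appliedResult.json()) as { name: string }[]).map((row) => row.name));
    for (const file of files) {
      if (applied.has(file)) continue;
      // assumes one statement per file; exec() does not split multi-statement SQL
      await client.exec({ query: fs.readFileSync(path.join(dir, file), "utf8") });
      await client.insert({ table: "analytics._migrations", values: [{ name: file }], format: "JSONEachRow" });
      console.log(`[Clickhouse] Applied ${file}`);
    }
  } finally {
    await client.close();
  }
}

Rollbacks could then live in parallel .down.sql files, but version tracking alone already covers ordering and re-runnability.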

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8c030db and e622514.

📒 Files selected for processing (1)
  • apps/backend/scripts/clickhouse-migrations.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Always add new E2E tests when changing the API or SDK interface
For blocking alerts and errors, never use toast; use alerts instead as they are less easily missed by the user
NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling
Use ES6 maps instead of records wherever you can

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
**/*.{ts,tsx,css}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,css}: Keep hover/click transitions snappy and fast; avoid fade-in delays on hover. Apply transitions after action completion instead, like smooth fade-out when hover ends
Use hover-exit transitions instead of hover-enter transitions; for example, use 'transition-colors hover:transition-none' instead of fade-in on hover

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
apps/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

NEVER use Next.js dynamic functions if you can avoid them; prefer using client components and hooks like usePathname instead of await params to keep pages static

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
{.env*,**/*.{ts,tsx,js}}

📄 CodeRabbit inference engine (AGENTS.md)

Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
🧠 Learnings (2)
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to packages/stack-shared/src/config/schema.ts : Whenever making backwards-incompatible changes to the config schema, update the migration functions in packages/stack-shared/src/config/schema.ts

Applied to files:

  • apps/backend/scripts/clickhouse-migrations.ts
📚 Learning: 2025-12-17T01:23:15.483Z
Learnt from: N2D4
Repo: stack-auth/stack-auth PR: 1032
File: apps/backend/src/app/api/latest/analytics/query/route.tsx:20-20
Timestamp: 2025-12-17T01:23:15.483Z
Learning: In apps/backend/src/app/api/latest/analytics/query/route.tsx, the include_all_branches field controls which ClickHouse user type is used: when true, use "admin" authType for access to all branches; when false (default), use "external" authType for limited/filtered branch access.

Applied to files:

  • apps/backend/scripts/clickhouse-migrations.ts
🧬 Code graph analysis (1)
apps/backend/scripts/clickhouse-migrations.ts (2)
apps/backend/src/lib/clickhouse.tsx (1)
  • clickhouseAdminClient (21-21)
packages/stack-shared/src/utils/env.tsx (1)
  • getEnvVariable (16-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: check_prisma_migrations (22.x)
  • GitHub Check: build (22.x)
  • GitHub Check: all-good
  • GitHub Check: restart-dev-and-test
  • GitHub Check: restart-dev-and-test-with-custom-base-port
  • GitHub Check: setup-tests-with-custom-base-port
  • GitHub Check: build (22.x)
  • GitHub Check: setup-tests
  • GitHub Check: lint_and_build (latest)
  • GitHub Check: E2E Tests (Node 22.x, Freestyle prod)
  • GitHub Check: docker
  • GitHub Check: E2E Tests (Node 22.x, Freestyle mock)

});
// todo: create migration files
await client.exec({ query: EVENTS_TABLE_BASE_SQL });
await client.exec({
Contributor

@N2D4 N2D4 Dec 18, 2025

somewhere here we probably want to set

users_without_row_policies_can_read_rows

maybe some more from here? https://clickhouse.com/docs/operations/server-configuration-parameters/settings#access_control_improvements

Contributor

do we need to set custom_settings_prefixes as well for it?

Contributor

actually i think we can't change these, but we should at least check that they're the correct value, also select_from_system_db_requires_grant probably
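A rough sketch of what such a check could look like inside the migration script — this assumes the settings are exposed via system.server_settings on the ClickHouse version in use, and the expected values below are placeholders to be decided per deployment, not a recommendation:

// Hypothetical guard: these are server-level config values the migration cannot
// change, so the best it can do is fail fast when they don't match expectations.
const expectedServerSettings = new Map<string, string>([
  ["users_without_row_policies_can_read_rows", "<expected value>"],
  ["select_from_system_db_requires_grant", "<expected value>"],
]);
for (const [name, expected] of expectedServerSettings) {
  const resultSet = await client.query({
    query: "SELECT value FROM system.server_settings WHERE name = {name:String}",
    query_params: { name },
    format: "JSONEachRow",
  });
  const rows = (await resultSet.json()) as { value: string }[];
  if (rows.length !== 1 || rows[0].value !== expected) {
    throw new Error(`[Clickhouse] Expected server setting ${name}=${expected}, got ${rows[0]?.value ?? "<missing>"}`);
  }
}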


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/e2e/tests/backend/backend-helpers.ts (1)

89-107: Fix the camelCase detection regex - it only catches uppercase at position 1.

The regex /^[a-z0-9][A-Z][a-z0-9]+$/ only matches strings where the uppercase letter is at index 1 (e.g., "aB1", "uId"), which means it misses nearly all real camelCase identifiers:

  • "userId" - not matched (uppercase at index 4)
  • "accessToken" - not matched (uppercase at index 6)
  • "newUser" - not matched (uppercase at index 3)
  • "afterCallbackRedirectUrl" - not matched

The whitelist is also ineffective since those strings don't match the regex.

🔎 Proposed fix
-      if (key.match(/^[a-z0-9][A-Z][a-z0-9]+$/) && !key.includes("_") && !["newUser", "afterCallbackRedirectUrl"].includes(key)) {
+      if (key.match(/^[a-z][a-z0-9]*[A-Z]/) && !key.includes("_") && !["newUser", "afterCallbackRedirectUrl"].includes(key)) {
         throw new StackAssertionError(`Object has camelCase key (expected snake_case): ${path}.${key}`);
       }

This regex /^[a-z][a-z0-9]*[A-Z]/ correctly detects any string that starts with lowercase and contains at least one uppercase letter, catching all typical camelCase patterns.
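For a quick sanity check of the two patterns (plain Node, outside the test suite):

const current = /^[a-z0-9][A-Z][a-z0-9]+$/;
const proposed = /^[a-z][a-z0-9]*[A-Z]/;
for (const key of ["userId", "accessToken", "afterCallbackRedirectUrl", "uId"]) {
  console.log(key, { current: current.test(key), proposed: proposed.test(key) });
}
// current only flags "uId" (the uppercase letter must sit at index 1 and only
// lowercase/digits may follow); proposed flags all four, which is also why the
// "newUser"/"afterCallbackRedirectUrl" whitelist only becomes meaningful with it.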

🧹 Nitpick comments (3)
apps/backend/src/app/api/latest/internal/analytics/query/route.ts (1)

48-49: Minor inconsistency: readonly is a string while allow_ddl is a number.

Both settings are typically numeric in ClickHouse. While this may work due to type coercion, consider using consistent types for clarity.

🔎 Suggested fix
         max_execution_time: body.timeout_ms / 1000,
-        readonly: "1",
+        readonly: 1,
         allow_ddl: 0,
apps/backend/src/lib/clickhouse.tsx (1)

24-33: Performance note: SYSTEM FLUSH LOGS is blocking.

As noted in the TODO comment, this synchronous log flush adds latency to every analytics query. For production use, consider implementing the async polling approach or accepting slightly stale timing data when the log entry isn't immediately available.
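A possible non-blocking variant, sketched only to illustrate the trade-off — the exact ResultSet API and column names are assumptions, and this is not how the PR's getQueryTimingStats is implemented:

import type { ClickHouseClient } from "@clickhouse/client";

// Hypothetical: poll system.query_log instead of forcing a synchronous flush.
// Must run on the admin client, since limited_user has no system.* grants.
// Returns null if the entry hasn't materialized within ~1s, so the caller can
// return the query result without timing stats instead of blocking.
async function pollQueryTimingStats(client: ClickHouseClient, queryId: string) {
  for (let attempt = 0; attempt < 10; attempt++) {
    const resultSet = await client.query({
      query: "SELECT query_duration_ms FROM system.query_log WHERE query_id = {queryId:String} AND type = 'QueryFinish' LIMIT 1",
      query_params: { queryId },
      format: "JSONEachRow",
    });
    const rows = (await resultSet.json()) as { query_duration_ms: number }[];
    if (rows.length === 1) return rows[0];
    await new Promise((resolve) => setTimeout(resolve, 100));
  }
  return null;
}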

apps/e2e/tests/snapshot-serializer.ts (1)

110-110: Good addition to strip timeout durations.

The regex correctly strips non-deterministic timeout elapsed times from snapshot strings, making tests more stable. The pattern works well for typical timeout messages.

Optional: More precise regex pattern

The current pattern [0-9.]+ could theoretically match invalid sequences like "..." (though unlikely in practice). For stricter matching of valid numbers, consider:

-  [/(Timeout exceeded: elapsed )[0-9.]+( ms)/gi, "$1<stripped time>$2"],
+  [/(Timeout exceeded: elapsed )[0-9]+(?:\.[0-9]+)?( ms)/gi, "$1<stripped time>$2"],

This pattern explicitly matches integers or decimals (e.g., "123" or "123.45") but not malformed inputs.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fae5166 and 49c8d0f.

📒 Files selected for processing (7)
  • apps/backend/.env (1 hunks)
  • apps/backend/.env.development (1 hunks)
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts (1 hunks)
  • apps/backend/src/lib/clickhouse.tsx (1 hunks)
  • apps/e2e/tests/backend/backend-helpers.ts (1 hunks)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1 hunks)
  • apps/e2e/tests/snapshot-serializer.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Always add new E2E tests when changing the API or SDK interface
For blocking alerts and errors, never use toast; use alerts instead as they are less easily missed by the user
NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling
Use ES6 maps instead of records wherever you can

Files:

  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/backend/src/lib/clickhouse.tsx
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/snapshot-serializer.ts
**/*.{ts,tsx,css}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,css}: Keep hover/click transitions snappy and fast; avoid fade-in delays on hover. Apply transitions after action completion instead, like smooth fade-out when hover ends
Use hover-exit transitions instead of hover-enter transitions; for example, use 'transition-colors hover:transition-none' instead of fade-in on hover

Files:

  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/backend/src/lib/clickhouse.tsx
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/snapshot-serializer.ts
apps/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

NEVER use Next.js dynamic functions if you can avoid them; prefer using client components and hooks like usePathname instead of await params to keep pages static

Files:

  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/backend/src/lib/clickhouse.tsx
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/snapshot-serializer.ts
{.env*,**/*.{ts,tsx,js}}

📄 CodeRabbit inference engine (AGENTS.md)

Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Files:

  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/backend/src/lib/clickhouse.tsx
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/snapshot-serializer.ts
apps/e2e/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Files:

  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/snapshot-serializer.ts
apps/**/*.tsx

📄 CodeRabbit inference engine (AGENTS.md)

Check existing apps for inspiration when implementing new apps or pages; update apps-frontend.tsx and apps-config.ts to add new apps

Files:

  • apps/backend/src/lib/clickhouse.tsx
**/*.test.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Prefer .toMatchInlineSnapshot over other selectors in tests when possible; check snapshot-serializer.ts for formatting details

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
🧠 Learnings (7)
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.test.{ts,tsx} : Prefer .toMatchInlineSnapshot over other selectors in tests when possible; check snapshot-serializer.ts for formatting details

Applied to files:

  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/e2e/tests/snapshot-serializer.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to apps/e2e/**/*.{ts,tsx} : Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Applied to files:

  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.{ts,tsx} : Always add new E2E tests when changing the API or SDK interface

Applied to files:

  • apps/e2e/tests/backend/backend-helpers.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/e2e/tests/snapshot-serializer.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to {.env*,**/*.{ts,tsx,js}} : Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Applied to files:

  • apps/backend/.env.development
  • apps/backend/.env
📚 Learning: 2025-12-17T01:23:15.483Z
Learnt from: N2D4
Repo: stack-auth/stack-auth PR: 1032
File: apps/backend/src/app/api/latest/analytics/query/route.tsx:20-20
Timestamp: 2025-12-17T01:23:15.483Z
Learning: In apps/backend/src/app/api/latest/analytics/query/route.tsx, the include_all_branches field controls which ClickHouse user type is used: when true, use "admin" authType for access to all branches; when false (default), use "external" authType for limited/filtered branch access.

Applied to files:

  • apps/backend/src/lib/clickhouse.tsx
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.{ts,tsx} : NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling

Applied to files:

  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
📚 Learning: 2025-08-12T17:55:06.710Z
Learnt from: madster456
Repo: stack-auth/stack-auth PR: 769
File: apps/backend/.env:67-71
Timestamp: 2025-08-12T17:55:06.710Z
Learning: In the stack-auth project, .env files use inline comments after the = sign (e.g., KEY=# description) as a consistent pattern throughout the configuration file. The maintainers prefer this format over separate comment lines above each key.

Applied to files:

  • apps/backend/.env
🧬 Code graph analysis (1)
apps/backend/src/lib/clickhouse.tsx (2)
packages/stack-shared/src/utils/env.tsx (1)
  • getEnvVariable (16-58)
packages/stack-shared/src/utils/errors.tsx (1)
  • StackAssertionError (69-85)
🪛 dotenv-linter (4.0.0)
apps/backend/.env.development

[warning] 77-77: [SubstitutionKey] The STACK_CLICKHOUSE_URL key is not assigned properly

(SubstitutionKey)


[warning] 78-78: [UnorderedKey] The STACK_CLICKHOUSE_ADMIN_USER key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)


[warning] 79-79: [UnorderedKey] The STACK_CLICKHOUSE_ADMIN_PASSWORD key should go before the STACK_CLICKHOUSE_ADMIN_USER key

(UnorderedKey)


[warning] 80-80: [UnorderedKey] The STACK_CLICKHOUSE_EXTERNAL_PASSWORD key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)

apps/backend/.env

[warning] 86-86: [ValueWithoutQuotes] This value needs to be surrounded in quotes

(ValueWithoutQuotes)


[warning] 87-87: [UnorderedKey] The STACK_CLICKHOUSE_ADMIN_USER key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)


[warning] 87-87: [ValueWithoutQuotes] This value needs to be surrounded in quotes

(ValueWithoutQuotes)


[warning] 88-88: [UnorderedKey] The STACK_CLICKHOUSE_ADMIN_PASSWORD key should go before the STACK_CLICKHOUSE_ADMIN_USER key

(UnorderedKey)


[warning] 88-88: [ValueWithoutQuotes] This value needs to be surrounded in quotes

(ValueWithoutQuotes)


[warning] 89-89: [UnorderedKey] The STACK_CLICKHOUSE_EXTERNAL_PASSWORD key should go before the STACK_CLICKHOUSE_URL key

(UnorderedKey)


[warning] 89-89: [ValueWithoutQuotes] This value needs to be surrounded in quotes

(ValueWithoutQuotes)


[warning] 91-91: [ExtraBlankLine] Extra blank line detected

(ExtraBlankLine)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (8)
apps/backend/.env (1)

85-90: LGTM!

The ClickHouse environment variables are properly prefixed with STACK_ and follow the existing inline comment convention used throughout this file. Based on learnings, this .env format is the established pattern for this project.

apps/backend/.env.development (1)

75-80: LGTM!

The ClickHouse development configuration correctly follows established patterns: port variable substitution (${NEXT_PUBLIC_STACK_PORT_PREFIX:-81}33) and password placeholder format match the existing conventions in this file.

apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1)

1-15: LGTM! Comprehensive test coverage.

The test helper function and overall structure are well-designed. Tests appropriately use inline snapshots and cover the key functionality including authentication, parameters, timeouts, and error handling.

apps/backend/src/app/api/latest/internal/analytics/query/route.ts (2)

34-52: Handler structure addresses previous concerns.

The use of the singleton clickhouseExternalClient resolves the resource leak issue from prior reviews. The clickhouse_settings now properly include max_execution_time, readonly, and allow_ddl as previously suggested.


20-20: Clarify: 1 second minimum is technically effective, but document known limitations.

The 1 second minimum will be enforced by ClickHouse for most queries. However, timeouts may not be checked during certain query stages like sorting, joining, or finalizing aggregations. Consider adding a comment documenting this limitation rather than raising the minimum, or reference the ClickHouse documentation about timeout_overflow_mode for edge-case handling.
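A minimal sketch of the documentation-only option, reusing the settings object quoted in this review (the wording of the comment is mine, not the PR's):

clickhouse_settings: {
  // Note: ClickHouse may not check max_execution_time during some stages
  // (sorting, joining, finalizing aggregations), so long queries can overrun
  // the limit slightly; see the ClickHouse docs on timeout_overflow_mode if a
  // stricter cut-off is ever needed.
  max_execution_time: body.timeout_ms / 1000,
  readonly: 1,
  allow_ddl: 0,
},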

apps/backend/src/lib/clickhouse.tsx (2)

1-23: LGTM! Well-structured ClickHouse client module.

The singleton client pattern (clickhouseAdminClient, clickhouseExternalClient) is appropriate for connection pooling and addresses previous resource leak concerns. The factory function allows flexibility for different auth types when needed.


52-59: Good defensive check for query log results.

The length check and StackAssertionError properly addresses the previous review concern about potential undefined access when the query log entry is not found.

apps/e2e/tests/snapshot-serializer.ts (1)

71-72: No changes needed. The field names cpu_time and wall_clock_time correctly match the actual field names in analytics query responses, as confirmed by the test snapshots. The snapshot serializer will properly strip these timing values to prevent flaky tests.

Likely an incorrect or invalid review comment.

const queries = [
"REVOKE ALL PRIVILEGES ON *.* FROM limited_user;",
"REVOKE ALL FROM limited_user;",
"GRANT SELECT ON analytics.events TO limited_user;",

Bug: Database mismatch between table creation and grants

The events table is created in the client's default database (determined by STACK_CLICKHOUSE_DATABASE env var, which defaults to "default"), but the grants on line 20 explicitly reference analytics.events. Since clickhouseAdminClient is created with clickhouseDefaultDatabase which defaults to "default", the table gets created in default.events while permissions are granted for analytics.events. This means the limited_user will not have access to query the actual events table, causing all analytics queries to fail with permission errors.

Additional Locations (1)

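One possible direction — sketched as an assumption about the intended layout (that the table belongs in the analytics database, matching the grants and row policy), not as the PR's actual fix:

// Hypothetical migration adjustment: create the database explicitly and qualify
// the table so the DDL target matches the GRANT/row-policy target.
await client.exec({ query: "CREATE DATABASE IF NOT EXISTS analytics" });
// EVENTS_TABLE_BASE_SQL would then need to read
//   CREATE TABLE IF NOT EXISTS analytics.events (...)
// instead of relying on the client's default database (STACK_CLICKHOUSE_DATABASE,
// defaulting to "default"); alternatively, the admin client could be constructed
// with database: "analytics" so unqualified names resolve there.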

await client.exec({ query });
}
console.log("[Clickhouse] Clickhouse migrations complete");
await client.close();

Bug: Singleton ClickHouse client closed after migrations

The runClickhouseMigrations() function closes the clickhouseAdminClient singleton at line 26. While migrations currently run as a separate script, this singleton is exported from clickhouse.tsx and used in other parts of the codebase (events.tsx line 198, migrate-events/route.tsx line 147). If migrations are ever run in the same process as the server, or if the code structure changes, subsequent operations using clickhouseAdminClient would fail with a closed client error. The PR discussion also flagged that creating/closing clients repeatedly is unnecessary.


body: JSON.stringify({
query: options.query,
params: options.params ?? {},
timeout_ms: options.timeout_ms ?? 1000,

Bug: Client defaults analytics query timeout to 1 second instead of 10

The queryAnalytics client method defaults timeout_ms to 1000 (1 second), while the server route defaults to 10000 (10 seconds) and enforces a minimum of 1000ms. This 10x mismatch means clients calling queryAnalytics() without specifying a timeout get the minimum allowed value rather than a reasonable default. Most non-trivial analytics queries would fail with timeout errors since 1 second is extremely short. This appears to be a typo where 1000 was written instead of 10000.

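If the server default is indeed the intended client default, the corresponding change would presumably be a single line (sketch, not the actual commit):

body: JSON.stringify({
  query: options.query,
  params: options.params ?? {},
  timeout_ms: options.timeout_ms ?? 10000,  // align with the server-side default of 10 seconds
}),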

}
} else {
for (const [key, value] of Object.entries(obj)) {
if (key.match(/[a-z0-9][A-Z][a-z0-9]+/) && !key.includes("_") && !["newUser", "afterCallbackRedirectUrl"].includes(key)) {

Bug: Test regex change weakens camelCase detection for API responses

The camelCase detection regex was changed from /[a-z0-9][A-Z][a-z0-9]+/ to /^[a-z0-9][A-Z][a-z0-9]+$/. The original regex matched a camelCase pattern anywhere in a string, but the new anchored regex only matches keys that are exactly one lowercase character or digit, one uppercase character, and one or more trailing lowercase characters or digits. This misses standard camelCase keys like myKeyValue, userName, or isActive, which have more than one lowercase character before the first uppercase letter. The test helper will therefore fail to flag most camelCase keys (i.e., snake_case violations) in API responses.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 49c8d0f and 1134883.

📒 Files selected for processing (6)
  • apps/backend/package.json (2 hunks)
  • apps/backend/src/lib/clickhouse.tsx (1 hunks)
  • apps/dev-launchpad/public/index.html (3 hunks)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts (1 hunks)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1 hunks)
  • package.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • apps/backend/package.json
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/backend/src/lib/clickhouse.tsx
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Always add new E2E tests when changing the API or SDK interface
For blocking alerts and errors, never use toast; use alerts instead as they are less easily missed by the user
NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling
Use ES6 maps instead of records wherever you can

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
**/*.test.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Prefer .toMatchInlineSnapshot over other selectors in tests when possible; check snapshot-serializer.ts for formatting details

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
**/*.{ts,tsx,css}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,css}: Keep hover/click transitions snappy and fast; avoid fade-in delays on hover. Apply transitions after action completion instead, like smooth fade-out when hover ends
Use hover-exit transitions instead of hover-enter transitions; for example, use 'transition-colors hover:transition-none' instead of fade-in on hover

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
apps/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

NEVER use Next.js dynamic functions if you can avoid them; prefer using client components and hooks like usePathname instead of await params to keep pages static

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
{.env*,**/*.{ts,tsx,js}}

📄 CodeRabbit inference engine (AGENTS.md)

Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
apps/e2e/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
🧠 Learnings (3)
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to apps/e2e/**/*.{ts,tsx} : Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.{ts,tsx} : Always add new E2E tests when changing the API or SDK interface

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Run pnpm install to install dependencies, pnpm test run for testing with Vitest, pnpm lint for linting (use --fix flag to auto-fix), pnpm typecheck for type checking

Applied to files:

  • package.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (6)
package.json (1)

33-33: LGTM! Follows existing pattern for database CLI tools.

The new clickhouse script mirrors the psql script pattern on line 32, running pre-checks and delegating to the backend package. This provides a consistent developer experience for interacting with both database systems.

apps/dev-launchpad/public/index.html (2)

132-132: Clarify: Is this entry still needed in background services?

Based on the past review comments, you mentioned removing ClickHouse from background services since it has an HTML interface. However, the "ClickHouse native interface" entry is still present here. Please clarify whether both the background service (native interface) and the app entry (HTTP interface) are intentionally needed, or if this should be removed as previously indicated.


292-292: LGTM!

The port suffix change from "33" to "35" appropriately avoids conflict with the new ClickHouse service.

apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts (3)

5-29: LGTM: Well-structured parameterized query helper.

The queryEvents helper correctly builds a parameterized ClickHouse query with conditional filters, properly mapping parameters to prevent SQL injection.


31-52: LGTM: Robust retry logic for eventual consistency.

The fetchEventsWithRetry helper properly handles eventual consistency by retrying queries with configurable attempts and delays, breaking early on errors.


127-150: LGTM: Consistent use of fetchEventsWithRetry and proper user isolation testing.

This test correctly demonstrates the pattern of using the fetchEventsWithRetry helper and thoroughly verifies that analytics events are properly filtered by user within a project.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1)

1-1315: Optional: Consider adding functional tests for actual analytics data querying.

While the security and permission tests are comprehensive, there are no tests that:

  1. Insert test events into analytics.events
  2. Query them with filters (by event_type, date range, etc.)
  3. Verify that row policies correctly filter data by project_id/branch_id
  4. Test aggregations and analytics queries on real data

These functional tests would complement the excellent security coverage. However, they may already exist in other test files or be planned for future implementation.

Would you like me to generate example tests for analytics data querying, or search the codebase to verify if such tests already exist?

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1134883 and 8e72a7b.

📒 Files selected for processing (2)
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts (1 hunks)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
🧰 Additional context used
📓 Path-based instructions (6)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Always add new E2E tests when changing the API or SDK interface
For blocking alerts and errors, never use toast; use alerts instead as they are less easily missed by the user
NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling
Use ES6 maps instead of records wherever you can

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
**/*.test.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Prefer .toMatchInlineSnapshot over other selectors in tests when possible; check snapshot-serializer.ts for formatting details

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
**/*.{ts,tsx,css}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,css}: Keep hover/click transitions snappy and fast; avoid fade-in delays on hover. Apply transitions after action completion instead, like smooth fade-out when hover ends
Use hover-exit transitions instead of hover-enter transitions; for example, use 'transition-colors hover:transition-none' instead of fade-in on hover

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
apps/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

NEVER use Next.js dynamic functions if you can avoid them; prefer using client components and hooks like usePathname instead of await params to keep pages static

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
{.env*,**/*.{ts,tsx,js}}

📄 CodeRabbit inference engine (AGENTS.md)

Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
apps/e2e/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
🧠 Learnings (3)
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to apps/e2e/**/*.{ts,tsx} : Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.{ts,tsx} : Always add new E2E tests when changing the API or SDK interface

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.test.{ts,tsx} : Prefer .toMatchInlineSnapshot over other selectors in tests when possible; check snapshot-serializer.ts for formatting details

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: build (22.x)
  • GitHub Check: E2E Tests (Node 22.x, Freestyle mock)
  • GitHub Check: E2E Tests (Node 22.x, Freestyle prod)
  • GitHub Check: all-good
  • GitHub Check: setup-tests-with-custom-base-port
  • GitHub Check: check_prisma_migrations (22.x)
  • GitHub Check: lint_and_build (latest)
  • GitHub Check: restart-dev-and-test-with-custom-base-port
  • GitHub Check: setup-tests
  • GitHub Check: restart-dev-and-test
  • GitHub Check: build (22.x)
  • GitHub Check: docker
🔇 Additional comments (5)
apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (5)

1-15: LGTM: Clean test helper with proper isolation.

The runQuery helper appropriately creates a fresh project for each test, ensuring test isolation. The admin access and internal endpoint usage are correct for testing this analytics query interface.


17-223: LGTM: Comprehensive basic functionality coverage.

Excellent coverage of core query functionality including parameterization, timeouts, validation, and error handling. The test for SQL_project_id and SQL_branch_id settings (lines 200-223) is particularly important for verifying tenant isolation.


226-423: LGTM: Thorough permission and grant verification.

The comprehensive testing of user restrictions, grants, and permission boundaries is excellent. The verification that the user is limited_user with restricted grants (lines 335-423) provides strong assurance that the security model is enforced.


512-571: LGTM: Database context and SQL injection tests are solid.

The verification that the current database is default (line 541) explains why SHOW TABLES returns empty (line 521), and the SQL injection test (lines 552-571) confirms that parameterized queries properly escape dangerous input.


573-1315: Excellent: Comprehensive security testing covers all major attack vectors.

This is exceptional security coverage testing:

  • Setting override prevention (lines 573-613): Prevents tenant boundary violations
  • Injection bypasses (lines 615-655): Subqueries, UNION, parameters
  • Data exfiltration (lines 657-739): file(), url(), remote(), cluster(), s3()
  • Dangerous functions (lines 833-957): dictGet(), executable(), jdbc(), mysql(), postgresql()
  • Schema manipulation (lines 1043-1146): DROP, ALTER, TRUNCATE, DELETE, UPDATE
  • Obfuscation attempts (lines 959-999, 770-789): Comments, CTEs, multi-statements
  • Resource limits (lines 1212-1252): Prevents DoS via generateRandom/numbers

This level of testing demonstrates strong security awareness and is appropriate for a critical authentication/analytics system. As per coding guidelines and learnings, erring on the side of too many tests is correct for this industry.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
apps/backend/scripts/clickhouse-migrations.ts (1)

6-6: Critical: Closing singleton client breaks subsequent operations.

The migration uses clickhouseAdminClient singleton but then closes it at line 26. This singleton is shared across the codebase and closing it will cause all subsequent ClickHouse operations to fail with a closed client error.

🔎 Fix by creating a dedicated client instance
 export async function runClickhouseMigrations() {
   console.log("[Clickhouse] Running Clickhouse migrations...");
-  const client = clickhouseAdminClient;
+  const client = createClickhouseClient("admin");
   const clickhouseExternalPassword = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_PASSWORD");

Then add proper error handling to ensure cleanup:

 export async function runClickhouseMigrations() {
   console.log("[Clickhouse] Running Clickhouse migrations...");
   const client = createClickhouseClient("admin");
-  const clickhouseExternalPassword = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_PASSWORD");
-  await client.exec({
-    query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH sha256_password BY {clickhouseExternalPassword:String}",
-    query_params: { clickhouseExternalPassword },
-  });
-  // todo: create migration files
-  await client.exec({ query: EVENTS_TABLE_BASE_SQL });
-  const queries = [
-    "REVOKE ALL PRIVILEGES ON *.* FROM limited_user;",
-    "REVOKE ALL FROM limited_user;",
-    "GRANT SELECT ON analytics.events TO limited_user;",
-  ];
-  await client.exec({
-    query: "CREATE ROW POLICY IF NOT EXISTS events_project_isolation ON analytics.events FOR SELECT USING project_id = getSetting('SQL_project_id') AND branch_id = getSetting('SQL_branch_id') TO limited_user",
-  });
-  for (const query of queries) {
-    await client.exec({ query });
+  try {
+    const clickhouseExternalPassword = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_PASSWORD");
+    await client.exec({
+      query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH sha256_password BY {clickhouseExternalPassword:String}",
+      query_params: { clickhouseExternalPassword },
+    });
+    // todo: create migration files
+    await client.exec({ query: EVENTS_TABLE_BASE_SQL });
+    const queries = [
+      "REVOKE ALL PRIVILEGES ON *.* FROM limited_user;",
+      "REVOKE ALL FROM limited_user;",
+      "GRANT SELECT ON analytics.events TO limited_user;",
+    ];
+    await client.exec({
+      query: "CREATE ROW POLICY IF NOT EXISTS events_project_isolation ON analytics.events FOR SELECT USING project_id = getSetting('SQL_project_id') AND branch_id = getSetting('SQL_branch_id') TO limited_user",
+    });
+    for (const query of queries) {
+      await client.exec({ query });
+    }
+    console.log("[Clickhouse] Clickhouse migrations complete");
+  } finally {
+    await client.close();
   }
-  console.log("[Clickhouse] Clickhouse migrations complete");
-  await client.close();
 }

You'll need to import createClickhouseClient from @/lib/clickhouse.

Also applies to: 26-26

🧹 Nitpick comments (5)
apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts (3)

28-49: Consider adjusting retry logic to match the attempts parameter semantics.

The current implementation makes attempts + 1 queries total: one initial query (line 35) plus up to attempts additional queries in the loop (line 45). With the default attempts: 5, this results in up to 6 queries.

If attempts is meant to represent the total number of query attempts, consider starting the loop at 1 instead of 0, or adjusting the loop condition to attempt < attempts - 1.

🔎 Proposed adjustment
 const fetchEventsWithRetry = async (
   params: { userId?: string, eventType?: string },
   options: { attempts?: number, delayMs?: number } = {}
 ) => {
   const attempts = options.attempts ?? 5;
   const delayMs = options.delayMs ?? 250;
 
   let response = await queryEvents(params);
-  for (let attempt = 0; attempt < attempts; attempt++) {
+  for (let attempt = 1; attempt < attempts; attempt++) {
     if (response.status !== 200) {
       break;
     }
     const results = Array.isArray(response.body?.result) ? response.body.result : [];
     if (results.length > 0) {
       break;
     }
     await wait(delayMs);
     response = await queryEvents(params);
   }
 
   return response;
 };

52-70: Consider using inline snapshots for consistency.

Test 2 and Test 3's project B verification both use .toMatchInlineSnapshot(), which aligns with the coding guideline: "Prefer .toMatchInlineSnapshot over other selectors in tests when possible." This test could be refactored to use the same pattern for consistency and better change detection.

As per coding guidelines, inline snapshots help catch unexpected changes in response structure.

🔎 Proposed refactor using inline snapshot
 it("stores backend events in ClickHouse", async ({ expect }) => {
   const { projectId } = await Project.createAndSwitch({ config: { magic_link_enabled: true } });
   const { userId } = await Auth.Otp.signIn();
 
   const queryResponse = await fetchEventsWithRetry({
     userId,
     eventType: "$session-activity",
   });
 
-  expect(queryResponse.status).toBe(200);
-  const results = Array.isArray(queryResponse.body?.result) ? queryResponse.body.result : [];
-  expect(results.length).toBeGreaterThan(0);
-  expect(results[0]).toMatchObject({
-    event_type: "$session-activity",
-    project_id: projectId,
-    branch_id: "main",
-    user_id: userId,
-  });
+  expect(queryResponse).toMatchInlineSnapshot(`
+    NiceResponse {
+      "status": 200,
+      "body": {
+        "result": [
+          {
+            "branch_id": "main",
+            "event_type": "$session-activity",
+            "project_id": "<stripped UUID>",
+            "team_id": "",
+            "user_id": "<stripped UUID>",
+          },
+        ],
+        "stats": {
+          "cpu_time": <stripped field 'cpu_time'>,
+          "wall_clock_time": <stripped field 'wall_clock_time'>,
+        },
+      },
+      "headers": Headers { <some fields may have been hidden> },
+    }
+  `);
 });

129-152: Consider adding inline snapshots for consistency.

Similar to Test 1, this test uses individual assertions while Test 2 demonstrates the inline snapshot pattern. You could enhance this test by using inline snapshots for the response structure while retaining the .every() assertions for user ID validation.

As per coding guidelines, inline snapshots help catch unexpected changes in response structure.

🔎 Proposed enhancement with inline snapshots
 it("filters analytics events by user within a project", async ({ expect }) => {
   await Project.createAndSwitch({ config: { magic_link_enabled: true } });
   const { userId: userA } = await Auth.Otp.signIn();
   await bumpEmailAddress();
   const { userId: userB } = await Auth.Otp.signIn();
 
   const userAResponse = await fetchEventsWithRetry({
     userId: userA,
     eventType: "$session-activity",
   });
-  expect(userAResponse.status).toBe(200);
+  expect(userAResponse).toMatchInlineSnapshot(`
+    NiceResponse {
+      "status": 200,
+      "body": {
+        "result": [
+          {
+            "branch_id": "main",
+            "event_type": "$session-activity",
+            "project_id": "<stripped UUID>",
+            "team_id": "",
+            "user_id": "<stripped UUID>",
+          },
+        ],
+        "stats": {
+          "cpu_time": <stripped field 'cpu_time'>,
+          "wall_clock_time": <stripped field 'wall_clock_time'>,
+        },
+      },
+      "headers": Headers { <some fields may have been hidden> },
+    }
+  `);
   const userAResults = Array.isArray(userAResponse.body?.result) ? userAResponse.body.result : [];
-  expect(userAResults.length).toBeGreaterThan(0);
   expect(userAResults.every((row: any) => row.user_id === userA)).toBe(true);
 
   const userBResponse = await fetchEventsWithRetry({
     userId: userB,
     eventType: "$session-activity",
   });
-  expect(userBResponse.status).toBe(200);
+  expect(userBResponse).toMatchInlineSnapshot(`
+    NiceResponse {
+      "status": 200,
+      "body": {
+        "result": [
+          {
+            "branch_id": "main",
+            "event_type": "$session-activity",
+            "project_id": "<stripped UUID>",
+            "team_id": "",
+            "user_id": "<stripped UUID>",
+          },
+        ],
+        "stats": {
+          "cpu_time": <stripped field 'cpu_time'>,
+          "wall_clock_time": <stripped field 'wall_clock_time'>,
+        },
+      },
+      "headers": Headers { <some fields may have been hidden> },
+    }
+  `);
   const userBResults = Array.isArray(userBResponse.body?.result) ? userBResponse.body.result : [];
-  expect(userBResults.length).toBeGreaterThan(0);
   expect(userBResults.every((row: any) => row.user_id === userB)).toBe(true);
 });
apps/backend/scripts/clickhouse-migrations.ts (2)

14-24: Consider reordering permission statements for clarity.

The REVOKE/GRANT statements are executed after the row policy is created. For better readability and logical flow, consider executing the permission setup immediately after user creation.

🔎 Suggested reordering
   await client.exec({
     query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH sha256_password BY {clickhouseExternalPassword:String}",
     query_params: { clickhouseExternalPassword },
   });
+  const queries = [
+    "REVOKE ALL PRIVILEGES ON *.* FROM limited_user;",
+    "REVOKE ALL FROM limited_user;",
+    "GRANT SELECT ON analytics.events TO limited_user;",
+  ];
+  for (const query of queries) {
+    await client.exec({ query });
+  }
   // todo: create migration files
   await client.exec({ query: EVENTS_TABLE_BASE_SQL });
-  const queries = [
-    "REVOKE ALL PRIVILEGES ON *.* FROM limited_user;",
-    "REVOKE ALL FROM limited_user;",
-    "GRANT SELECT ON analytics.events TO limited_user;",
-  ];
   await client.exec({
     query: "CREATE ROW POLICY IF NOT EXISTS events_project_isolation ON analytics.events FOR SELECT USING project_id = getSetting('SQL_project_id') AND branch_id = getSetting('SQL_branch_id') TO limited_user",
   });
-  for (const query of queries) {
-    await client.exec({ query });
-  }

12-12: TODO: Migration file system not yet implemented.

The inline approach works for initial setup but doesn't support versioning or rollback. A proper migration system with numbered migration files would be more maintainable.

Do you want me to generate a migration file structure with version tracking, or open an issue to track this improvement?

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8e72a7b and ba8110f.

📒 Files selected for processing (4)
  • apps/backend/scripts/clickhouse-migrations.ts (1 hunks)
  • apps/backend/src/lib/events.tsx (4 hunks)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts (1 hunks)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Always add new E2E tests when changing the API or SDK interface
For blocking alerts and errors, never use toast; use alerts instead as they are less easily missed by the user
NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling
Use ES6 maps instead of records wherever you can

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/backend/src/lib/events.tsx
  • apps/backend/scripts/clickhouse-migrations.ts
**/*.test.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Prefer .toMatchInlineSnapshot over other selectors in tests when possible; check snapshot-serializer.ts for formatting details

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
**/*.{ts,tsx,css}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,css}: Keep hover/click transitions snappy and fast; avoid fade-in delays on hover. Apply transitions after action completion instead, like smooth fade-out when hover ends
Use hover-exit transitions instead of hover-enter transitions; for example, use 'transition-colors hover:transition-none' instead of fade-in on hover

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/backend/src/lib/events.tsx
  • apps/backend/scripts/clickhouse-migrations.ts
apps/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

NEVER use Next.js dynamic functions if you can avoid them; prefer using client components and hooks like usePathname instead of await params to keep pages static

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/backend/src/lib/events.tsx
  • apps/backend/scripts/clickhouse-migrations.ts
{.env*,**/*.{ts,tsx,js}}

📄 CodeRabbit inference engine (AGENTS.md)

Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/backend/src/lib/events.tsx
  • apps/backend/scripts/clickhouse-migrations.ts
apps/e2e/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
apps/**/*.tsx

📄 CodeRabbit inference engine (AGENTS.md)

Check existing apps for inspiration when implementing new apps or pages; update apps-frontend.tsx and apps-config.ts to add new apps

Files:

  • apps/backend/src/lib/events.tsx
🧠 Learnings (4)
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to apps/e2e/**/*.{ts,tsx} : Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.{ts,tsx} : Always add new E2E tests when changing the API or SDK interface

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
📚 Learning: 2025-12-17T01:23:15.483Z
Learnt from: N2D4
Repo: stack-auth/stack-auth PR: 1032
File: apps/backend/src/app/api/latest/analytics/query/route.tsx:20-20
Timestamp: 2025-12-17T01:23:15.483Z
Learning: In apps/backend/src/app/api/latest/analytics/query/route.tsx, the include_all_branches field controls which ClickHouse user type is used: when true, use "admin" authType for access to all branches; when false (default), use "external" authType for limited/filtered branch access.

Applied to files:

  • apps/backend/src/lib/events.tsx
  • apps/backend/scripts/clickhouse-migrations.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to packages/stack-shared/src/config/schema.ts : Whenever making backwards-incompatible changes to the config schema, update the migration functions in packages/stack-shared/src/config/schema.ts

Applied to files:

  • apps/backend/scripts/clickhouse-migrations.ts
🧬 Code graph analysis (2)
apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts (1)
apps/e2e/tests/backend/backend-helpers.ts (3)
  • niceBackendFetch (109-173)
  • backendContext (35-57)
  • bumpEmailAddress (179-187)
apps/backend/scripts/clickhouse-migrations.ts (2)
apps/backend/src/lib/clickhouse.tsx (1)
  • clickhouseAdminClient (21-21)
packages/stack-shared/src/utils/env.tsx (1)
  • getEnvVariable (16-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: build (22.x)
  • GitHub Check: all-good
  • GitHub Check: build (22.x)
  • GitHub Check: setup-tests-with-custom-base-port
  • GitHub Check: lint_and_build (latest)
  • GitHub Check: E2E Tests (Node 22.x, Freestyle mock)
  • GitHub Check: restart-dev-and-test
  • GitHub Check: check_prisma_migrations (22.x)
  • GitHub Check: E2E Tests (Node 22.x, Freestyle prod)
  • GitHub Check: setup-tests
  • GitHub Check: docker
  • GitHub Check: restart-dev-and-test-with-custom-base-port
🔇 Additional comments (6)
apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts (1)

72-127: LGTM! Well-structured tenant isolation test.

The test correctly verifies that analytics events are scoped to projects. The use of inline snapshots follows the coding guideline, and the previous review concern about using fetchEventsWithRetry has been properly addressed.

apps/backend/src/lib/events.tsx (3)

55-55: Good addition for backward compatibility.

The teamId field with .optional().default("") ensures that existing events without this field will still validate correctly, while new events can include team context.


157-170: LGTM! Defensive data extraction is appropriate.

The defensive type checking with fallback values ensures robust event logging even when data doesn't contain expected fields. The explicit type checks prevent runtime errors during ClickHouse insertion.


198-217: LGTM! ClickHouse insertion design is sound.

The implementation correctly uses the admin client for event ingestion (write path), while query operations use appropriate tenant-scoped clients. The fire-and-forget error handling via runAsynchronouslyAndWaitUntil is appropriate for analytics logging, and the async_insert setting optimizes throughput (which explains why the E2E tests use retry logic).

Inserting one row per event type in the inheritance chain is an intentional design that enables flexible querying across different event type hierarchies.

Based on learnings, the admin client usage is correct for the write path.
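
For illustration, a minimal sketch of that expansion (field and variable names are assumptions, not the exact code in events.tsx):

type EventTypeRef = { id: string };

// One row per event type in the inheritance chain, so queries can aggregate
// on the concrete type ("$session-activity") or on any of its ancestors.
function expandEventChainToRows(
  eventChain: EventTypeRef[],
  base: { projectId: string, branchId: string, userId: string, eventAt: string },
) {
  return eventChain.map(eventType => ({
    event_type: eventType.id,
    event_at: base.eventAt,
    project_id: base.projectId,
    branch_id: base.branchId,
    user_id: base.userId,
  }));
}

// e.g. expandEventChainToRows([{ id: "$session-activity" }, { id: "$activity" }], base)
// yields two insertable rows for a single logged event.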

apps/backend/scripts/clickhouse-migrations.ts (2)

29-47: LGTM: Table schema is well-designed.

The table definition is properly qualified with the analytics database, and the schema includes all necessary fields. The ORDER BY (project_id, branch_id, event_at) clause efficiently supports the row policy filtering, and PARTITION BY toYYYYMM(event_at) is appropriate for time-series analytics data.


19-21: Verify the limited_user role is configured to support custom settings in readonly mode.

The row policy references getSetting('SQL_project_id') and getSetting('SQL_branch_id'), which require two prerequisites:

  1. Custom settings must begin with a prefix declared in the server's custom_settings_prefixes configuration to allow the SQL_ prefix
  2. The limited_user role must configure these custom settings as CHANGEABLE_IN_READONLY to modify them in readonly mode

Ensure your ClickHouse server configuration and role setup include:

  • Server config: <custom_settings_prefixes>SQL_</custom_settings_prefixes> (or similar)
  • Role setup: ALTER ROLE limited_user SETTINGS SQL_project_id CHANGEABLE_IN_READONLY, SQL_branch_id CHANGEABLE_IN_READONLY

Without these settings, the row policy will fail at query time when the limited_user attempts to use getSetting().
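
For reference, a minimal sketch of the query-side counterpart, assuming the @clickhouse/client API, that the SQL_ prefix is allowed via custom_settings_prefixes, and that limited_user has SELECT on the events table; this is not the PR's code.

import { createClient } from "@clickhouse/client";

const external = createClient({
  username: "limited_user",
  password: process.env.STACK_CLICKHOUSE_EXTERNAL_PASSWORD,
});

// Tenant-scoped query as limited_user: the custom SQL_* settings are what the
// row policy reads back via getSetting(). They are not in the client's typed
// settings map, hence the cast.
export async function queryTenantEvents(projectId: string, branchId: string) {
  const resultSet = await external.query({
    query: "SELECT event_type, user_id FROM events LIMIT 10",
    format: "JSONEachRow",
    clickhouse_settings: {
      readonly: "1",
      SQL_project_id: projectId,
      SQL_branch_id: branchId,
    } as any,
  });
  return resultSet.json();
}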

Comment on lines +106 to +116
if (isNaN(errorCode)) {
return null;
}
const message = "message" in error && typeof error.message === "string" ? error.message : null;
if (SAFE_CLICKHOUSE_ERROR_CODES.includes(errorCode)) {
return message;
}
if (UNSAFE_CLICKHOUSE_ERROR_CODES.includes(errorCode)) {
return DEFAULT_CLICKHOUSE_ERROR_MESSAGE;
}
return null;
Contributor

what if it's an error code that's neither safe nor unsafe? we should captureError to sentry and then return DEFAULT_CLICKHOUSE_ERROR_MESSAGE

@cmux-agent
cmux-agent bot commented Dec 20, 2025

Older cmux preview screenshots (latest comment is below)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

♻️ Duplicate comments (2)
apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx (1)

41-41: Extract branchId from event data like other fields.

The branchId is hardcoded to DEFAULT_BRANCH_ID instead of being extracted from dataRecord.branchId like the other fields (projectId, userId, teamId, etc.). This causes historical events with non-default branch IDs to be migrated with incorrect data.

🔎 Proposed fix
-  const branchId = DEFAULT_BRANCH_ID;
+  const branchId = typeof dataRecord.branchId === "string" ? dataRecord.branchId : DEFAULT_BRANCH_ID;
apps/backend/scripts/db-migrations.ts (1)

160-160: Clarify that this is the ClickHouse migration.

As mentioned in a previous review, adding a console log here would clarify that this is the ClickHouse migration (vs the Postgres migration above).

🔎 Suggested logging enhancement
  console.log('='.repeat(60) + '\n');

+ console.log('[Migration] Running ClickHouse migrations...');
  await runClickhouseMigrations();

Based on past review feedback requesting clarification between Postgres and ClickHouse migrations.

🧹 Nitpick comments (2)
apps/backend/src/app/api/latest/internal/analytics/query/route.ts (1)

65-66: Consider wrapping post-query operations in error handling.

If resultSet.data.json() or getQueryTimingStats() fails, raw errors will propagate. While less critical than query errors, wrapping these in try-catch and mapping to known errors would provide more consistent error responses.

apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx (1)

1-1: Remove unused import.

The createClickhouseClient function is imported but never used in this file. Only clickhouseAdminClient is used at line 147.

🔎 Proposed fix
-import { clickhouseAdminClient, createClickhouseClient } from "@/lib/clickhouse";
+import { clickhouseAdminClient } from "@/lib/clickhouse";
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ba8110f and 136b25f.

📒 Files selected for processing (7)
  • apps/backend/scripts/clickhouse-migrations.ts (1 hunks)
  • apps/backend/scripts/db-migrations.ts (3 hunks)
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts (1 hunks)
  • apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx (1 hunks)
  • apps/backend/src/lib/events.tsx (4 hunks)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts (1 hunks)
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/backend/scripts/clickhouse-migrations.ts
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Always add new E2E tests when changing the API or SDK interface
For blocking alerts and errors, never use toast; use alerts instead as they are less easily missed by the user
NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling
Use ES6 maps instead of records wherever you can

Files:

  • apps/backend/scripts/db-migrations.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx
  • apps/backend/src/lib/events.tsx
**/*.{ts,tsx,css}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,css}: Keep hover/click transitions snappy and fast; avoid fade-in delays on hover. Apply transitions after action completion instead, like smooth fade-out when hover ends
Use hover-exit transitions instead of hover-enter transitions; for example, use 'transition-colors hover:transition-none' instead of fade-in on hover

Files:

  • apps/backend/scripts/db-migrations.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx
  • apps/backend/src/lib/events.tsx
apps/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

NEVER use Next.js dynamic functions if you can avoid them; prefer using client components and hooks like usePathname instead of await params to keep pages static

Files:

  • apps/backend/scripts/db-migrations.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx
  • apps/backend/src/lib/events.tsx
{.env*,**/*.{ts,tsx,js}}

📄 CodeRabbit inference engine (AGENTS.md)

Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Files:

  • apps/backend/scripts/db-migrations.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx
  • apps/backend/src/lib/events.tsx
**/*.test.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Prefer .toMatchInlineSnapshot over other selectors in tests when possible; check snapshot-serializer.ts for formatting details

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
apps/e2e/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
apps/**/*.tsx

📄 CodeRabbit inference engine (AGENTS.md)

Check existing apps for inspiration when implementing new apps or pages; update apps-frontend.tsx and apps-config.ts to add new apps

Files:

  • apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx
  • apps/backend/src/lib/events.tsx
🧠 Learnings (6)
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to packages/stack-shared/src/config/schema.ts : Whenever making backwards-incompatible changes to the config schema, update the migration functions in packages/stack-shared/src/config/schema.ts

Applied to files:

  • apps/backend/scripts/db-migrations.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to apps/e2e/**/*.{ts,tsx} : Always add new E2E tests when changing API or SDK interface; err on the side of creating too many tests due to the critical nature of the industry

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.{ts,tsx} : Always add new E2E tests when changing the API or SDK interface

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts
  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.test.{ts,tsx} : Prefer .toMatchInlineSnapshot over other selectors in tests when possible; check snapshot-serializer.ts for formatting details

Applied to files:

  • apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts
📚 Learning: 2025-12-17T01:23:15.483Z
Learnt from: N2D4
Repo: stack-auth/stack-auth PR: 1032
File: apps/backend/src/app/api/latest/analytics/query/route.tsx:20-20
Timestamp: 2025-12-17T01:23:15.483Z
Learning: In apps/backend/src/app/api/latest/analytics/query/route.tsx, the include_all_branches field controls which ClickHouse user type is used: when true, use "admin" authType for access to all branches; when false (default), use "external" authType for limited/filtered branch access.

Applied to files:

  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
  • apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx
  • apps/backend/src/lib/events.tsx
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to **/*.{ts,tsx} : NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling

Applied to files:

  • apps/backend/src/app/api/latest/internal/analytics/query/route.ts
🧬 Code graph analysis (3)
apps/backend/scripts/db-migrations.ts (3)
apps/backend/src/lib/clickhouse.tsx (1)
  • createClickhouseClient (12-19)
apps/backend/src/prisma-client.tsx (1)
  • sqlQuoteIdent (429-431)
apps/backend/scripts/clickhouse-migrations.ts (1)
  • runClickhouseMigrations (4-29)
apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (3)
apps/e2e/tests/backend/backend-helpers.ts (1)
  • niceBackendFetch (109-173)
apps/e2e/tests/helpers.ts (1)
  • it (12-12)
packages/stack-shared/src/utils/strings.tsx (1)
  • deindent (235-238)
apps/backend/src/app/api/latest/internal/analytics/query/route.ts (4)
packages/stack-shared/src/utils/errors.tsx (1)
  • StackAssertionError (69-85)
apps/backend/src/lib/clickhouse.tsx (2)
  • clickhouseExternalClient (22-22)
  • getQueryTimingStats (24-60)
packages/stack-shared/src/utils/results.tsx (1)
  • error (36-41)
packages/stack-shared/src/known-errors.tsx (1)
  • errorCode (64-66)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: build (22.x)
  • GitHub Check: E2E Tests (Node 22.x, Freestyle prod)
  • GitHub Check: restart-dev-and-test
  • GitHub Check: E2E Tests (Node 22.x, Freestyle mock)
  • GitHub Check: build (22.x)
  • GitHub Check: restart-dev-and-test-with-custom-base-port
  • GitHub Check: check_prisma_migrations (22.x)
  • GitHub Check: docker
  • GitHub Check: lint_and_build (latest)
  • GitHub Check: setup-tests-with-custom-base-port
  • GitHub Check: all-good
  • GitHub Check: setup-tests
🔇 Additional comments (28)
apps/backend/src/app/api/latest/internal/analytics/query/route.ts (3)

1-7: LGTM on imports and setup.

Clean imports and good use of the shared singleton clickhouseExternalClient which addresses previous resource leak concerns.


9-33: Schema and response structure look good.

The endpoint correctly:

  • Requires admin auth via adminAuthTypeSchema
  • Sets sensible defaults (timeout_ms: 10_000, min: 1_000)
  • Defines proper response schema with result and stats

34-55: Good security hardening on ClickHouse query execution.

The query settings properly enforce:

  • Tenant scoping via SQL_project_id and SQL_branch_id
  • Read-only mode (readonly: "1")
  • DDL prohibition (allow_ddl: 0)
  • Result size limits (max_result_rows, max_result_bytes)
  • Timeout via max_execution_time

The explicit throw for unimplemented include_all_branches is a reasonable interim approach.
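
A sketch of what the branch-scoping rule from the learning could look like once include_all_branches is implemented (client names mirror @/lib/clickhouse; the helper itself is hypothetical):

import { clickhouseAdminClient, clickhouseExternalClient } from "@/lib/clickhouse";

// include_all_branches=true gets the admin client (all branches),
// otherwise the tenant-scoped external client is used.
function pickAnalyticsClient(includeAllBranches: boolean) {
  return includeAllBranches ? clickhouseAdminClient : clickhouseExternalClient;
}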

apps/e2e/tests/backend/endpoints/api/v1/analytics-events.test.ts (5)

5-26: Clean helper function for querying events.

The queryEvents helper properly constructs parameterized queries with optional filters. Good use of template strings for conditional WHERE clauses.


28-49: Good retry helper implementation.

The fetchEventsWithRetry helper correctly handles async event propagation with configurable attempts and delays. The early-exit on non-200 status and result availability is appropriate.


52-70: Good coverage for event storage verification.

Test properly validates that backend events are stored in ClickHouse with correct fields (event_type, project_id, branch_id, user_id).


72-127: Excellent cross-project isolation test.

This test properly validates that events from project B are not visible when querying from project A context. Good use of inline snapshots per coding guidelines.


129-152: Good per-user filtering test.

Validates that events are correctly filtered by user within a project, ensuring proper user-level isolation.

apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts (13)

1-15: Good test helper setup.

The runQuery helper properly creates a fresh project context for each test and executes queries with admin access.


17-77: Good basic query and parameter tests.

Tests cover essential functionality: basic queries, parameterized queries, and custom timeouts. Inline snapshots properly strip dynamic timing fields.


79-125: Good validation and error handling tests.

Tests properly validate required fields and error propagation for invalid SQL.


178-198: Good timeout behavior test.

Validates that custom timeout triggers proper error response with elapsed time stripped in snapshot.


226-333: Excellent DDL/DML restriction tests.

Comprehensive coverage for CREATE TABLE, INSERT, and settings modification restrictions. The readonly mode enforcement is properly validated.


335-423: Good privilege and grants verification.

Tests properly validate the restricted user context and limited grants configuration.


490-509: Good SQL injection protection test.

Properly validates that parameterized queries prevent SQL injection.
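
For context, a sketch of the parameterized-query shape these tests exercise (column names assumed): the {userId:String} placeholder is bound server-side, so an injection payload is treated as a literal value rather than SQL.

import { clickhouseExternalClient } from "@/lib/clickhouse";

// Sketch only: the payload below cannot break out of the WHERE clause.
export async function fetchEventsForUser(userId: string) {
  const resultSet = await clickhouseExternalClient.query({
    query: "SELECT event_type, event_at FROM events WHERE user_id = {userId:String}",
    query_params: { userId },
    format: "JSONEachRow",
  });
  return resultSet.json();
}

// e.g. fetchEventsForUser("' OR 1=1 --") simply matches no rows.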


553-677: Excellent access control tests for subqueries and table functions.

Comprehensive coverage for UNION attacks, file/network access, remote/cluster table access. All properly blocked.


679-727: Good multi-statement and CTE bypass prevention tests.

Validates that common bypass attempts are properly blocked.


729-811: Good database and table access restriction tests.

Validates restrictions on other databases, information_schema, dictionaries, and query log.


813-958: Comprehensive privilege and DDL restriction tests.

Good coverage for GRANT, CREATE FUNCTION, S3 access, executables, comment obfuscation, and materialized views.


981-1127: Good DML restriction tests.

Validates DROP, ALTER, TRUNCATE, DELETE, UPDATE restrictions and sensitive system table access blocks.


1129-1253: Good table function restriction tests.

Validates restrictions on input(), generateRandom(), numbers(), jdbc(), mysql(), and postgresql() table functions. The result limit errors confirm proper safeguards are in place.

apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx (3)

62-103: Request and response schemas are well-structured.

The schemas properly validate internal-only access, time ranges, pagination cursor, and provide comprehensive migration metrics in the response.


157-160: Verify async_insert setting is appropriate for migration.

The async_insert: 1 setting enables asynchronous inserts in ClickHouse, which improves throughput but means data isn't immediately persisted. If the service crashes before ClickHouse flushes the buffer, some migrated events could be lost.

For a one-time migration, this might be acceptable, but verify:

  1. Whether migration progress tracking accounts for this eventual consistency
  2. Whether the migration UI shows accurate progress despite async inserts
  3. Whether retrying a failed migration batch would handle partial inserts correctly

146-164: Add deduplication logic or use ReplacingMergeTree to prevent duplicate inserts on repeated migrations.

The migration endpoint uses cursor-based pagination (suggesting it's designed for resumable backfills), but has no deduplication. If called multiple times with overlapping time ranges, it will insert duplicate rows into the ClickHouse analytics_internal.events table. The table uses standard MergeTree engine (not ReplacingMergeTree), which allows duplicates by design.

Consider one of:

  1. Document that migrations must use non-overlapping time ranges and never be re-run for the same range
  2. Change the table engine to ReplacingMergeTree with a deduplication key (a sketch follows after this list)
  3. Add logic to check if rows already exist in ClickHouse before inserting, updating skipped_existing_events accordingly
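
A rough sketch of option 2, written as a SQL constant in the style of the migration script; the column set is assumed from the rows inserted elsewhere in this PR, and an extra stable key column (event_id here) is needed so ReplacingMergeTree can recognize duplicates.

// Assumed schema: ReplacingMergeTree collapses rows sharing the full ORDER BY
// key during background merges (or at query time with FINAL).
const EVENTS_TABLE_DEDUP_SQL = `
  CREATE TABLE IF NOT EXISTS analytics_internal.events (
    event_id String,
    event_type LowCardinality(String),
    event_at DateTime64(3),
    project_id String,
    branch_id String,
    user_id String,
    team_id String,
    data String
  )
  ENGINE = ReplacingMergeTree
  PARTITION BY toYYYYMM(event_at)
  ORDER BY (project_id, branch_id, event_at, event_type, event_id)
`;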
apps/backend/src/lib/events.tsx (3)

55-55: LGTM: teamId field addition.

The optional teamId field with a default empty string is appropriate for backward compatibility with existing events while enabling team-level analytics.


157-170: LGTM: Event data preparation is thorough.

The extraction of contextual fields (projectId, branchId, userId, teamId, sessionId, isAnonymous) with appropriate defaults ensures robust event data for ClickHouse ingestion.


200-200: Verify the ClickHouse table name matches the migration setup.

This code writes to analytics_internal.events, but based on the migration script snippets, tables are created in the default database (creating default.events). Ensure the table name here matches what's created during migrations.

apps/backend/scripts/db-migrations.ts (1)

21-22: No action needed. The database naming is consistent and intentional.

The code creates an analytics_internal database containing the actual events table, with a default.events VIEW that exposes the data with row-level security. This architecture—separating internal storage from user-facing abstraction—is deliberate. There is no separate analytics database; the reference to one in the original comment appears to be based on a misunderstanding of the migration structure.

Likely an incorrect or invalid review comment.

import { getEnvVariable } from "@stackframe/stack-shared/dist/utils/env";
import { runClickhouseMigrations } from "./clickhouse-migrations";

const clickhouseClient = createClickhouseClient("admin");
Contributor

⚠️ Potential issue | 🟠 Major

Close the ClickHouse client to prevent resource leaks.

The clickhouseClient is created at module scope but never explicitly closed. Unlike clickhouseAdminClient, which is a singleton managed elsewhere, this instance should either be closed after use or dropped in favor of reusing the existing clickhouseAdminClient.

🔎 Proposed fix: reuse the existing admin client
-const clickhouseClient = createClickhouseClient("admin");
-
 const dropSchema = async () => {
   await globalPrismaClient.$executeRaw(Prisma.sql`DROP SCHEMA ${sqlQuoteIdent(globalPrismaSchema)} CASCADE`);
   await globalPrismaClient.$executeRaw(Prisma.sql`CREATE SCHEMA ${sqlQuoteIdent(globalPrismaSchema)}`);
   await globalPrismaClient.$executeRaw(Prisma.sql`GRANT ALL ON SCHEMA ${sqlQuoteIdent(globalPrismaSchema)} TO postgres`);
   await globalPrismaClient.$executeRaw(Prisma.sql`GRANT ALL ON SCHEMA ${sqlQuoteIdent(globalPrismaSchema)} TO public`);
-  await clickhouseClient.command({ query: "DROP DATABASE IF EXISTS analytics_internal" });
-  await clickhouseClient.command({ query: "CREATE DATABASE IF NOT EXISTS analytics_internal" });
+  await clickhouseAdminClient.command({ query: "DROP DATABASE IF EXISTS analytics_internal" });
+  await clickhouseAdminClient.command({ query: "CREATE DATABASE IF NOT EXISTS analytics_internal" });
 };

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/backend/scripts/db-migrations.ts around line 14 the code creates a
clickhouseClient at module scope (const clickhouseClient =
createClickhouseClient("admin");) but never closes it; either reuse the existing
clickhouseAdminClient instead of creating a new client or ensure this client is
explicitly closed when migrations finish (e.g., use the shared
clickhouseAdminClient import or call the client's close/dispose method in a
finally/exit handler after all operations complete) to prevent resource leaks.

Comment on lines +101 to +117
function getSafeClickhouseErrorMessage(error: unknown): string | null {
if (typeof error !== "object" || error === null || !("code" in error) || typeof error.code !== "string") {
return null;
}
const errorCode = Number(error.code);
if (isNaN(errorCode)) {
return null;
}
const message = "message" in error && typeof error.message === "string" ? error.message : null;
if (SAFE_CLICKHOUSE_ERROR_CODES.includes(errorCode)) {
return message;
}
if (UNSAFE_CLICKHOUSE_ERROR_CODES.includes(errorCode)) {
return DEFAULT_CLICKHOUSE_ERROR_MESSAGE;
}
return null;
}
Contributor

⚠️ Potential issue | 🟡 Minor

Unknown error codes should log to Sentry before returning a generic message.

Per prior review feedback from N2D4, when an error code is in neither SAFE_CLICKHOUSE_ERROR_CODES nor UNSAFE_CLICKHOUSE_ERROR_CODES, the function returns null, which triggers a StackAssertionError. As a result, the error is logged, but an internal error is also returned to the user.

Instead, unknown codes should log to Sentry for investigation but return the safe default message to the user.

🔎 Proposed fix
+import { captureError } from "@stackframe/stack-shared/dist/utils/errors";
+
 function getSafeClickhouseErrorMessage(error: unknown): string | null {
   if (typeof error !== "object" || error === null || !("code" in error) || typeof error.code !== "string") {
     return null;
   }
   const errorCode = Number(error.code);
   if (isNaN(errorCode)) {
     return null;
   }
   const message = "message" in error && typeof error.message === "string" ? error.message : null;
   if (SAFE_CLICKHOUSE_ERROR_CODES.includes(errorCode)) {
     return message;
   }
   if (UNSAFE_CLICKHOUSE_ERROR_CODES.includes(errorCode)) {
     return DEFAULT_CLICKHOUSE_ERROR_MESSAGE;
   }
-  return null;
+  // Unknown error code - log to Sentry for investigation but return safe message to user
+  captureError("unknown-clickhouse-error-code", new Error(`Unknown ClickHouse error code: ${errorCode}`), { error });
+  return DEFAULT_CLICKHOUSE_ERROR_MESSAGE;
 }
🤖 Prompt for AI Agents
In apps/backend/src/app/api/latest/internal/analytics/query/route.ts around
lines 101 to 117, the function currently returns null for ClickHouse error codes
that are neither in SAFE_CLICKHOUSE_ERROR_CODES nor
UNSAFE_CLICKHOUSE_ERROR_CODES; instead capture those unknown errors with Sentry
(e.g., Sentry.captureException or the project error-logging helper) including
the full error object and any contextual info (query, params, request id), then
return DEFAULT_CLICKHOUSE_ERROR_MESSAGE to the caller; ensure the Sentry call
runs only for the unknown branch and does not change the function signature or
throw.

Comment on lines +62 to +208
export const POST = createSmartRouteHandler({
metadata: {
summary: "Migrate analytics events from Postgres to ClickHouse",
description: "Internal-only endpoint to backfill existing events into ClickHouse.",
hidden: true,
},
request: yupObject({
auth: yupObject({
project: yupObject({
id: yupString().oneOf(["internal"]).defined(),
}).defined(),
user: yupObject({
id: yupString().defined(),
}).optional(),
}).defined(),
body: yupObject({
min_created_at_millis: yupNumber().integer().defined(),
max_created_at_millis: yupNumber().integer().defined(),
cursor: yupObject({
created_at_millis: yupNumber().integer().defined(),
id: yupString().uuid().defined(),
}).optional(),
limit: yupNumber().integer().min(1).default(1000),
}).defined(),
}),
response: yupObject({
statusCode: yupNumber().oneOf([200]).defined(),
bodyType: yupString().oneOf(["json"]).defined(),
body: yupObject({
total_events: yupNumber().defined(),
processed_events: yupNumber().defined(),
remaining_events: yupNumber().defined(),
migrated_events: yupNumber().defined(),
skipped_existing_events: yupNumber().defined(),
inserted_rows: yupNumber().defined(),
progress: yupNumber().min(0).max(1).defined(),
next_cursor: yupObject({
created_at_millis: yupNumber().integer().defined(),
id: yupString().defined(),
}).nullable().defined(),
}).defined(),
}),
async handler({ body }) {
const minCreatedAt = parseMillisOrThrow(body.min_created_at_millis, "min_created_at_millis");
const maxCreatedAt = parseMillisOrThrow(body.max_created_at_millis, "max_created_at_millis");
if (minCreatedAt >= maxCreatedAt) {
throw new StatusError(400, "min_created_at_millis must be before max_created_at_millis");
}
const cursorCreatedAt = body.cursor ? parseMillisOrThrow(body.cursor.created_at_millis, "cursor.created_at_millis") : undefined;
const cursorId = body.cursor?.id;
const limit = body.limit;

const baseWhere: Prisma.EventWhereInput = {
createdAt: {
gte: minCreatedAt,
lt: maxCreatedAt,
},
};

const cursorFilter: Prisma.EventWhereInput | undefined = (cursorCreatedAt && cursorId) ? {
OR: [
{ createdAt: { gt: cursorCreatedAt } },
{ createdAt: cursorCreatedAt, id: { gt: cursorId } },
],
} : undefined;

const where: Prisma.EventWhereInput = cursorFilter
? { AND: [baseWhere, cursorFilter] }
: baseWhere;

const totalEvents = await globalPrismaClient.event.count({ where: baseWhere });

const events = await globalPrismaClient.event.findMany({
where,
orderBy: [
{ createdAt: "asc" },
{ id: "asc" },
],
take: limit,
});

let insertedRows = 0;
let migratedEvents = 0;

if (events.length) {
const clickhouseClient = clickhouseAdminClient;
const rowsByEvent = events.map(createClickhouseRows);
const rowsToInsert = rowsByEvent.flat();
migratedEvents = rowsByEvent.reduce((acc, rows) => acc + (rows.length ? 1 : 0), 0);

if (rowsToInsert.length) {
await clickhouseClient.insert({
table: "analytics_internal.events",
values: rowsToInsert,
format: "JSONEachRow",
clickhouse_settings: {
date_time_input_format: "best_effort",
async_insert: 1,
},
});
insertedRows = rowsToInsert.length;
}
}

const lastEvent = events.at(-1);
const nextCursor: Cursor | null = lastEvent ? {
created_at_millis: lastEvent.createdAt.getTime(),
id: lastEvent.id,
} : null;
const progressCursor: Cursor | null = nextCursor ?? (cursorCreatedAt && body.cursor ? {
created_at_millis: body.cursor.created_at_millis,
id: body.cursor.id,
} : null);

const progressCursorCreatedAt = progressCursor ? new Date(progressCursor.created_at_millis) : null;
const remainingWhere = progressCursor ? {
AND: [
baseWhere,
{
OR: [
{ createdAt: { gt: progressCursorCreatedAt! } },
{ createdAt: progressCursorCreatedAt!, id: { gt: progressCursor.id } },
],
},
],
} : baseWhere;

const remainingEvents = await globalPrismaClient.event.count({ where: remainingWhere });
const processedEvents = totalEvents - remainingEvents;
const progress = totalEvents === 0 ? 1 : processedEvents / totalEvents;

return {
statusCode: 200,
bodyType: "json",
body: {
total_events: totalEvents,
processed_events: processedEvents,
remaining_events: remainingEvents,
migrated_events: migratedEvents,
skipped_existing_events: 0,
inserted_rows: insertedRows,
progress,
next_cursor: nextCursor,
},
};
},
});
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for E2E tests related to the migrate-events endpoint

# List files in apps/e2e directory
echo "=== E2E test directory structure ==="
find apps/e2e -type f -name "*.ts" -o -name "*.tsx" -o -name "*.js" 2>/dev/null | head -20

# Search for migrate-events references in test files
echo ""
echo "=== Search for migrate-events in test files ==="
rg "migrate-events|migrateEvents" apps/e2e --type ts --type tsx 2>/dev/null || echo "No matches found"

# Search for clickhouse migration tests
echo ""
echo "=== Search for clickhouse tests ==="
rg "clickhouse" apps/e2e --type ts --type tsx -i 2>/dev/null || echo "No matches found"

Repository: stack-auth/stack-auth

Length of output: 1609


🏁 Script executed:

#!/bin/bash
# Check the contents of the migration-tests.test.ts file
head -50 apps/e2e/tests/backend/endpoints/api/migration-tests.test.ts

Repository: stack-auth/stack-auth

Length of output: 2554


Add E2E tests for the migration endpoint.

The repository requires endpoint tests in apps/e2e when the API is changed. Create tests covering successful migration with pagination, error handling for invalid timestamps (min ≥ max), cursor-based pagination behavior, progress calculation accuracy, and empty result handling.

🤖 Prompt for AI Agents
In apps/backend/src/app/api/latest/internal/clickhouse/migrate-events/route.tsx
around lines 62-208 the PR added/changed an internal migration endpoint but the
repo requires E2E tests when API behavior changes; add tests under apps/e2e that
exercise this endpoint. Create tests that: call POST
/api/latest/internal/clickhouse/migrate-events with valid min/max timestamps and
assert successful migration response shape (total_events, processed_events,
remaining_events, migrated_events, inserted_rows, progress between 0 and 1,
next_cursor nullable); test pagination by seeding multiple events, calling with
limit and cursor to verify cursor advances and subsequent page returns expected
processed/migrated counts; test error handling by POSTing min_created_at_millis
>= max_created_at_millis and expect 400; test empty-range handling (no events)
returns total_events 0, progress 1, next_cursor null; ensure tests seed/cleanup
Prisma test DB and either stub/mock ClickHouse insert or assert on side-effects
(e.g., migrated_events/inserted_rows) deterministically; include fixtures for
events with createdAt and id, and run tests in apps/e2e test suite with
appropriate auth (internal project id).
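
One of those tests might look roughly like this; the endpoint path, helper import paths, and the accessType/auth shape are assumptions modeled on the other backend E2E tests in this PR, not verified against the repo:

import { it } from "../../../../helpers";
import { niceBackendFetch } from "../../../backend-helpers";

it("rejects migration ranges where min >= max", async ({ expect }) => {
  // Hypothetical call; assumes the test runs in the internal project context.
  const response = await niceBackendFetch("/api/v1/internal/clickhouse/migrate-events", {
    method: "POST",
    accessType: "admin",
    body: {
      min_created_at_millis: 2_000,
      max_created_at_millis: 1_000,
    },
  });
  expect(response.status).toBe(400);
});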

Comment on lines +198 to +217
const clickhouseClient = clickhouseAdminClient;
await clickhouseClient.insert({
table: "analytics_internal.events",
values: eventTypesArray.map(eventType => ({
event_type: eventType.id,
event_at: timeRange.end,
data: clickhouseEventData,
project_id: projectId,
branch_id: branchId,
user_id: userId,
team_id: teamId,
is_anonymous: isAnonymous,
session_id: sessionId,
})),
format: "JSONEachRow",
clickhouse_settings: {
date_time_input_format: "best_effort",
async_insert: 1,
},
});
Contributor

⚠️ Potential issue | 🟠 Major

Add error handling to prevent ClickHouse failures from disrupting event logging.

The ClickHouse insert is awaited without a try-catch block inside runAsynchronouslyAndWaitUntil. If ClickHouse is unavailable or the insert fails, it could disrupt the entire event logging chain (including PostHog). Consider isolating ClickHouse failures so DB and PostHog logging succeed independently.

🔎 Proposed fix with isolated error handling
    });

+   // log event in ClickHouse (isolated to prevent disrupting DB/PostHog logging)
+   try {
-     const clickhouseClient = clickhouseAdminClient;
-     await clickhouseClient.insert({
-       table: "analytics_internal.events",
-       values: eventTypesArray.map(eventType => ({
-         event_type: eventType.id,
-         event_at: timeRange.end,
-         data: clickhouseEventData,
-         project_id: projectId,
-         branch_id: branchId,
-         user_id: userId,
-         team_id: teamId,
-         is_anonymous: isAnonymous,
-         session_id: sessionId,
-       })),
-       format: "JSONEachRow",
-       clickhouse_settings: {
-         date_time_input_format: "best_effort",
-         async_insert: 1,
-       },
-     });
+     const clickhouseClient = clickhouseAdminClient;
+     await clickhouseClient.insert({
+       table: "analytics_internal.events",
+       values: eventTypesArray.map(eventType => ({
+         event_type: eventType.id,
+         event_at: timeRange.end,
+         data: clickhouseEventData,
+         project_id: projectId,
+         branch_id: branchId,
+         user_id: userId,
+         team_id: teamId,
+         is_anonymous: isAnonymous,
+         session_id: sessionId,
+       })),
+       format: "JSONEachRow",
+       clickhouse_settings: {
+         date_time_input_format: "best_effort",
+         async_insert: 1,
+       },
+     });
+   } catch (error) {
+     console.error('[Events] Failed to log event to ClickHouse:', error);
+   }

    // log event in PostHog

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/backend/src/lib/events.tsx around lines 198 to 217, the awaited
ClickHouse insert is not wrapped in error handling which can cause ClickHouse
failures to abort the overall event logging flow; wrap the
clickhouseClient.insert call in a try-catch, log the error (including
error.message/stack and contextual info like projectId/branchId/event types)
using the existing logger, and do not rethrow so execution continues to
PostHog/DB logging; ensure the catch is narrow (only around the insert) so other
upstream errors still surface.

Comment on lines +213 to +216
clickhouse_settings: {
date_time_input_format: "best_effort",
async_insert: 1,
},
Contributor

⚠️ Potential issue | 🔴 Critical

Add wait_for_async_insert: 1 to async insert configuration.

The async_insert: 1 setting is appropriate for analytics events, but it must be paired with wait_for_async_insert: 1. Without it, the server uses fire-and-forget mode, which means your application won't receive feedback if inserts fail and data loss can occur silently. ClickHouse's strong recommendation is to use async_insert=1 with wait_for_async_insert=1 for asynchronous inserts. Data is not searchable by queries before being flushed to storage, but with proper configuration, this delay is minimal and acceptable for analytics use cases.

🤖 Prompt for AI Agents
In apps/backend/src/lib/events.tsx around lines 213 to 216, the ClickHouse async
insert config sets async_insert: 1 but omits wait_for_async_insert: 1; add
wait_for_async_insert: 1 to the clickhouse_settings object so async inserts are
performed in non-fire-and-forget mode and the client receives failure feedback,
keeping the rest of the settings intact.
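
Concretely, the suggested pairing would look like this; the helper is a sketch, not the committed code, with the table name and settings copied from the snippet above:

import { clickhouseAdminClient } from "@/lib/clickhouse";

// Async inserts that still report flush failures to the caller.
export async function insertAnalyticsRows(rows: Record<string, unknown>[]) {
  await clickhouseAdminClient.insert({
    table: "analytics_internal.events",
    values: rows,
    format: "JSONEachRow",
    clickhouse_settings: {
      date_time_input_format: "best_effort",
      async_insert: 1,
      // Without this the server acknowledges before the buffer is flushed,
      // so failed inserts are dropped silently.
      wait_for_async_insert: 1,
    },
  });
}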

Comment on lines +450 to +468
it("SHOW TABLES should be empty because current database is empty", async ({ expect }) => {
const response = await runQuery({
query: "SHOW TABLES",
});

expect(response).toMatchInlineSnapshot(`
NiceResponse {
"status": 200,
"body": {
"result": [{ "name": "events" }],
"stats": {
"cpu_time": <stripped field 'cpu_time'>,
"wall_clock_time": <stripped field 'wall_clock_time'>,
},
},
"headers": Headers { <some fields may have been hidden> },
}
`);
});
Contributor

⚠️ Potential issue | 🟡 Minor

Test name doesn't match behavior.

The test is named "SHOW TABLES should be empty because current database is empty" but the snapshot shows it returns [{ "name": "events" }], which is not empty. The test name should be corrected.

🔎 Proposed fix
-it("SHOW TABLES should be empty because current database is empty", async ({ expect }) => {
+it("SHOW TABLES returns only the events table", async ({ expect }) => {
🤖 Prompt for AI Agents
In apps/e2e/tests/backend/endpoints/api/v1/analytics-query.test.ts around lines
450 to 468, the test name says the database is empty but the snapshot shows a
table named "events"; update the test description to match observed behavior
(e.g., "SHOW TABLES should return events table when database contains events")
or if intended to assert empty, change the snapshot/expectation to an empty
result; ensure the test name and its expected snapshot are consistent.

}
const message = "message" in error && typeof error.message === "string" ? error.message : null;
if (SAFE_CLICKHOUSE_ERROR_CODES.includes(errorCode)) {
return message;

Suggested change
-    return message;
+    return message ?? DEFAULT_CLICKHOUSE_ERROR_MESSAGE;

Safe ClickHouse error codes without a message property will throw a StackAssertionError instead of the expected AnalyticsQueryError, exposing internal error details to users instead of a user-friendly error message.


Analysis

Safe ClickHouse error codes without a message property throw StackAssertionError instead of AnalyticsQueryError

What fails: The getSafeClickhouseErrorMessage() function in apps/backend/src/app/api/latest/internal/analytics/query/route.ts returns null when encountering a safe ClickHouse error code (like SYNTAX_ERROR=62, TIMEOUT_EXCEEDED=159) that lacks a message property. This causes the error handler at lines 59-62 to throw a StackAssertionError with the full error object as context, exposing internal error details instead of returning a user-friendly AnalyticsQueryError.

How to reproduce:

// Error object with safe error code but no message property
const error = { code: "62" }; // SYNTAX_ERROR without message
const message = getSafeClickhouseErrorMessage(error);
// Returns null instead of a default message

This causes the handler to execute:

if (message === null) {
  throw new StackAssertionError("Unknown Clickhouse error", { cause: resultSet.error });
}

Result: Returns StackAssertionError with full error object exposed to user

Expected: Should return AnalyticsQueryError with user-friendly message like other safe errors do

Fix applied: Changed line 111 from return message; to return message ?? DEFAULT_CLICKHOUSE_ERROR_MESSAGE; to ensure safe error codes without a message property fall back to the default message, maintaining consistency with unsafe error code handling and preventing exposure of internal error details.

@cmux-agent

cmux-agent bot commented Dec 20, 2025

Preview Screenshots


Preview screenshots are being captured...

Workspace and dev browser links will appear here once the preview environment is ready.


Generated by cmux preview system

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (3)
apps/backend/scripts/clickhouse-migrations.ts (3)

6-6: Critical: Closing the singleton admin client breaks subsequent usage.

The code uses clickhouseAdminClient (a module-level singleton exported from @/lib/clickhouse and used elsewhere in the codebase, e.g., apps/backend/src/lib/events.tsx line 198), but then closes it at line 28. This will break all subsequent operations that depend on this shared client.

🔎 Proposed fix

Since this is a one-off migration script, either:

Option 1 (Recommended): Create a dedicated client instance for migrations and close it:

-  const client = clickhouseAdminClient;
+  const client = createClickhouseClient("admin");

Then keep the client.close() at line 28.

Option 2: If using the singleton intentionally, remove the close call:

   console.log("[Clickhouse] Clickhouse migrations complete");
-  await client.close();

Also applies to: 28-28


4-29: Critical: Missing error handling causes resource leaks and silent failures.

The migration function lacks try-catch-finally error handling. If any client.exec() call fails, the client connection will leak (won't be closed), and the error won't be properly logged or propagated.

🔎 Proposed fix

Wrap the migration logic in try-finally to ensure cleanup:

 export async function runClickhouseMigrations() {
   console.log("[Clickhouse] Running Clickhouse migrations...");
   const client = clickhouseAdminClient;
-  const clickhouseExternalPassword = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_PASSWORD");
-  await client.exec({
-    query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH sha256_password BY {clickhouseExternalPassword:String}",
-    query_params: { clickhouseExternalPassword },
-  });
-  // todo: create migration files
-  await client.exec({ query: EXTERNAL_ANALYTICS_DB_SQL });
-  await client.exec({ query: EVENTS_TABLE_BASE_SQL });
-  await client.exec({ query: EVENTS_VIEW_SQL });
-  const queries = [
-    "REVOKE ALL PRIVILEGES ON *.* FROM limited_user;",
-    "REVOKE ALL FROM limited_user;",
-    "GRANT SELECT ON default.events TO limited_user;",
-  ];
-  await client.exec({
-    query: "CREATE ROW POLICY IF NOT EXISTS events_project_isolation ON default.events FOR SELECT USING project_id = getSetting('SQL_project_id') AND branch_id = getSetting('SQL_branch_id') TO limited_user",
-  });
-  for (const query of queries) {
-    await client.exec({ query });
+  try {
+    const clickhouseExternalPassword = getEnvVariable("STACK_CLICKHOUSE_EXTERNAL_PASSWORD");
+    await client.exec({
+      query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH sha256_password BY {clickhouseExternalPassword:String}",
+      query_params: { clickhouseExternalPassword },
+    });
+    // todo: create migration files
+    await client.exec({ query: EXTERNAL_ANALYTICS_DB_SQL });
+    await client.exec({ query: EVENTS_TABLE_BASE_SQL });
+    await client.exec({ query: EVENTS_VIEW_SQL });
+    const queries = [
+      "REVOKE ALL PRIVILEGES ON *.* FROM limited_user;",
+      "REVOKE ALL FROM limited_user;",
+      "GRANT SELECT ON default.events TO limited_user;",
+    ];
+    await client.exec({
+      query: "CREATE ROW POLICY IF NOT EXISTS events_project_isolation ON default.events FOR SELECT USING project_id = getSetting('SQL_project_id') AND branch_id = getSetting('SQL_branch_id') TO limited_user",
+    });
+    for (const query of queries) {
+      await client.exec({ query });
+    }
+    console.log("[Clickhouse] Clickhouse migrations complete");
+  } catch (error) {
+    console.error("[Clickhouse] Migration failed:", error);
+    throw error;
+  } finally {
+    await client.close();
   }
-  console.log("[Clickhouse] Clickhouse migrations complete");
-  await client.close();
 }

Note: This still has the singleton closing issue from the previous comment.


21-23: Major: Row policy relies on custom settings without proper configuration.

The row policy uses getSetting('SQL_project_id') and getSetting('SQL_branch_id'), but there's no configuration to:

  1. Allow these custom settings to be changed in readonly mode
  2. Create a role with CHANGEABLE_IN_READONLY constraints for these settings
  3. Verify that server-side custom_settings_prefixes allows the SQL_ prefix

Without this configuration, the limited_user won't be able to set these custom settings, and the row policy won't function correctly, breaking tenant isolation.

🔎 Recommended approach

After creating the user, add role configuration:

   await client.exec({
     query: "CREATE USER IF NOT EXISTS limited_user IDENTIFIED WITH sha256_password BY {clickhouseExternalPassword:String}",
     query_params: { clickhouseExternalPassword },
   });
+  // Configure role to allow custom settings in readonly mode
+  await client.exec({
+    query: "CREATE ROLE IF NOT EXISTS limited_analytics_role SETTINGS SQL_project_id CHANGEABLE_IN_READONLY, SQL_branch_id CHANGEABLE_IN_READONLY",
+  });
+  await client.exec({
+    query: "GRANT limited_analytics_role TO limited_user",
+  });
+  await client.exec({
+    query: "SET DEFAULT ROLE limited_analytics_role TO limited_user",
+  });

Also, verify that the ClickHouse server configuration includes custom_settings_prefixes = 'SQL_' in the config file or environment.

Based on learnings from previous reviews on the analytics query route.
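For context, this is roughly how a query issued as limited_user would supply those settings once CHANGEABLE_IN_READONLY and the SQL_ prefix are configured; a sketch using @clickhouse/client with placeholder credentials (the cast is there because custom settings are not part of the typed settings map):

import { createClient } from "@clickhouse/client";

const limitedClient = createClient({
  url: process.env.STACK_CLICKHOUSE_URL,                     // placeholder
  username: "limited_user",
  password: process.env.STACK_CLICKHOUSE_EXTERNAL_PASSWORD,  // placeholder
});

export async function countTenantEvents(projectId: string, branchId: string) {
  const resultSet = await limitedClient.query({
    query: "SELECT count() AS event_count FROM default.events",
    format: "JSONEachRow",
    // Custom settings read by the row policy via getSetting(...).
    clickhouse_settings: {
      SQL_project_id: projectId,
      SQL_branch_id: branchId,
    } as any, // custom settings are not in the typed ClickHouseSettings map
  });
  return await resultSet.json();
}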

🧹 Nitpick comments (1)
apps/backend/scripts/clickhouse-migrations.ts (1)

12-12: TODO: Implement proper migration file system.

The inline SQL constants and sequential execution don't provide versioning, rollback capability, or idempotent re-runs. Consider implementing a proper migration system with:

  • Numbered migration files (e.g., 001_create_events_table.sql)
  • Migration tracking table to record applied migrations
  • Ability to roll back or skip already-applied migrations

Do you want me to generate a migration framework implementation or open an issue to track this enhancement?
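As a rough illustration of what such a framework could look like (a sketch only; migration names, SQL bodies, and env variable names are placeholders, not the repository's actual setup):

import { createClient } from "@clickhouse/client";

// Illustrative migration list; real DDL would live in numbered files.
const MIGRATIONS: Array<{ name: string, sql: string }> = [
  { name: "001_create_events_table", sql: "SELECT 1 /* real DDL would go here */" },
];

export async function runTrackedClickhouseMigrations() {
  const client = createClient({ url: process.env.STACK_CLICKHOUSE_URL }); // placeholder env variable
  try {
    await client.exec({ query: "CREATE DATABASE IF NOT EXISTS analytics_internal" });
    // Tracking table records which migrations have already been applied.
    await client.exec({
      query: `CREATE TABLE IF NOT EXISTS analytics_internal._migrations (
        name String,
        applied_at DateTime DEFAULT now()
      ) ENGINE = MergeTree ORDER BY name`,
    });
    const appliedRows = await client.query({
      query: "SELECT name FROM analytics_internal._migrations",
      format: "JSONEachRow",
    });
    const applied = new Set(((await appliedRows.json()) as Array<{ name: string }>).map((row) => row.name));
    for (const migration of MIGRATIONS) {
      if (applied.has(migration.name)) continue; // already applied => idempotent re-run
      await client.exec({ query: migration.sql });
      await client.insert({
        table: "analytics_internal._migrations",
        values: [{ name: migration.name }],
        format: "JSONEachRow",
      });
    }
  } finally {
    await client.close();
  }
}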

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 136b25f and 3760836.

📒 Files selected for processing (1)
  • apps/backend/scripts/clickhouse-migrations.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Always add new E2E tests when changing the API or SDK interface
For blocking alerts and errors, never use toast; use alerts instead as they are less easily missed by the user
NEVER try-catch-all, NEVER void a promise, and NEVER .catch(console.error); use loading indicators and async callbacks instead, or use runAsynchronously/runAsynchronouslyWithAlert for error handling
Use ES6 maps instead of records wherever you can

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
**/*.{ts,tsx,css}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,css}: Keep hover/click transitions snappy and fast; avoid fade-in delays on hover. Apply transitions after action completion instead, like smooth fade-out when hover ends
Use hover-exit transitions instead of hover-enter transitions; for example, use 'transition-colors hover:transition-none' instead of fade-in on hover

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
apps/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

NEVER use Next.js dynamic functions if you can avoid them; prefer using client components and hooks like usePathname instead of await params to keep pages static

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
{.env*,**/*.{ts,tsx,js}}

📄 CodeRabbit inference engine (AGENTS.md)

Prefix environment variables with STACK_ (or NEXT_PUBLIC_STACK_ if public) so changes are picked up by Turborepo and improves readability

Files:

  • apps/backend/scripts/clickhouse-migrations.ts
🧠 Learnings (2)
📚 Learning: 2025-12-04T18:03:49.984Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T18:03:49.984Z
Learning: Applies to packages/stack-shared/src/config/schema.ts : Whenever making backwards-incompatible changes to the config schema, update the migration functions in packages/stack-shared/src/config/schema.ts

Applied to files:

  • apps/backend/scripts/clickhouse-migrations.ts
📚 Learning: 2025-12-17T01:23:15.483Z
Learnt from: N2D4
Repo: stack-auth/stack-auth PR: 1032
File: apps/backend/src/app/api/latest/analytics/query/route.tsx:20-20
Timestamp: 2025-12-17T01:23:15.483Z
Learning: In apps/backend/src/app/api/latest/analytics/query/route.tsx, the include_all_branches field controls which ClickHouse user type is used: when true, use "admin" authType for access to all branches; when false (default), use "external" authType for limited/filtered branch access.

Applied to files:

  • apps/backend/scripts/clickhouse-migrations.ts
🧬 Code graph analysis (1)
apps/backend/scripts/clickhouse-migrations.ts (2)
apps/backend/src/lib/clickhouse.tsx (1)
  • clickhouseAdminClient (21-21)
packages/stack-shared/src/utils/env.tsx (1)
  • getEnvVariable (16-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: Vercel Agent Review
  • GitHub Check: setup-tests
  • GitHub Check: E2E Tests (Node 22.x, Freestyle prod)
  • GitHub Check: check_prisma_migrations (22.x)
  • GitHub Check: E2E Tests (Node 22.x, Freestyle mock)
  • GitHub Check: restart-dev-and-test-with-custom-base-port
  • GitHub Check: setup-tests-with-custom-base-port
  • GitHub Check: build (22.x)
  • GitHub Check: lint_and_build (latest)
  • GitHub Check: build (22.x)
  • GitHub Check: restart-dev-and-test
  • GitHub Check: docker
  • GitHub Check: all-good
🔇 Additional comments (4)
apps/backend/scripts/clickhouse-migrations.ts (4)

16-20: LGTM: Permission revocation and grant logic is sound.

The double revocation (REVOKE ALL PRIVILEGES followed by REVOKE ALL) ensures a clean slate, and the grant on default.events (the view) correctly restricts access to the filtered dataset through the row policy.


31-49: LGTM: Events table schema is well-designed.

The table structure is appropriate with:

  • LowCardinality(String) for event_type (good for cardinality optimization)
  • DateTime64(3, 'UTC') for timestamps with millisecond precision
  • JSON for flexible event data
  • Monthly partitioning by event_at for efficient data management
  • Sort order (project_id, branch_id, event_at) optimized for the row policy filters
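For reference, a sketch of DDL matching this description, embedded as a TypeScript constant in the style of the migration script; column names other than those listed above are assumptions, and the JSON column type requires a reasonably recent ClickHouse version:

const EVENTS_TABLE_SKETCH_SQL = `
  CREATE TABLE IF NOT EXISTS analytics_internal.events (
    id UUID,                        -- assumed column
    project_id String,
    branch_id String,
    event_type LowCardinality(String),
    event_at DateTime64(3, 'UTC'),
    data JSON                       -- assumed column name
  )
  ENGINE = MergeTree
  PARTITION BY toYYYYMM(event_at)
  ORDER BY (project_id, branch_id, event_at)
`;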

13-15: LGTM: Database, table, and view creation order is correct.

The execution sequence properly handles dependencies:

  1. Creates analytics_internal database first
  2. Creates analytics_internal.events table
  3. Creates default.events view pointing to the table
  4. Applies row policy and grants (lines 21-26)

51-57: Row policies may not apply correctly through SQL SECURITY DEFINER views.

Row policies are applied when a user performs a SELECT query and restrict which rows are returned, but the interaction with DEFINER views is not straightforward. ClickHouse has known issues where DEFINER and SQL SECURITY DEFINER settings on certain view types are ignored in favor of invoker permissions. Ensure that the analytics_internal.events table has a row policy defined and test that it correctly filters results when queried through this DEFINER view with a limited_user account. Consider adding E2E tests to verify row isolation works as expected.
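A possible shape for such a test, sketched with vitest and @clickhouse/client; credentials, fixture data, and the test harness are placeholders rather than the repository's actual E2E helpers:

import { createClient } from "@clickhouse/client";
import { describe, expect, it } from "vitest";

describe("ClickHouse row isolation", () => {
  it("only returns rows for the tenant set via custom settings", async () => {
    const limitedClient = createClient({
      url: process.env.STACK_CLICKHOUSE_URL,                     // placeholder
      username: "limited_user",
      password: process.env.STACK_CLICKHOUSE_EXTERNAL_PASSWORD,  // placeholder
    });
    try {
      const resultSet = await limitedClient.query({
        query: "SELECT DISTINCT project_id, branch_id FROM default.events",
        format: "JSONEachRow",
        clickhouse_settings: {
          SQL_project_id: "project-a",  // fixture data assumed to exist
          SQL_branch_id: "main",
        } as any,
      });
      const rows = (await resultSet.json()) as Array<{ project_id: string, branch_id: string }>;
      // Every row visible through the DEFINER view must belong to the scoped tenant.
      for (const row of rows) {
        expect(row.project_id).toBe("project-a");
        expect(row.branch_id).toBe("main");
      }
    } finally {
      await limitedClient.close();
    }
  });
});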

body: JSON.stringify({
query: options.query,
params: options.params ?? {},
timeout_ms: options.timeout_ms ?? 1000,
Copy link


Suggested change
- timeout_ms: options.timeout_ms ?? 1000,
+ timeout_ms: options.timeout_ms ?? 10_000,

The default timeout_ms value (1000ms) in the queryAnalytics method doesn't match the backend route's default (10_000ms), causing queries to time out prematurely when no explicit timeout is specified.


Analysis

Client-side timeout default (1000ms) doesn't match backend timeout default (10000ms)

What fails: StackAdminInterface.queryAnalytics() applies a 1000ms timeout when no explicit timeout is specified, but the backend API /api/v1/internal/analytics/query declares a default of 10000ms. This causes queries to time out prematurely.

How to reproduce:

  1. Open the query analytics dashboard page at /projects/[projectId]/query-analytics
  2. Run any query that takes longer than 1 second but less than 10 seconds (e.g., an aggregation over a large dataset)
  3. The query fails with a timeout error

What happens: Query fails with "Timeout exceeded: elapsed X ms, maximum: 1000 ms" error.

Expected behavior: The query should time out after 10 seconds, per the backend's schema definition: timeout_ms: yupNumber().integer().min(1_000).default(10_000) (route.ts line 20)
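One way to prevent this kind of drift (a suggestion, not something the PR currently does) would be to define the default once in shared code and reference it from both sides; names and locations below are illustrative:

// e.g. somewhere in packages/stack-shared (illustrative name and location)
export const DEFAULT_ANALYTICS_QUERY_TIMEOUT_MS = 10_000;

// Client side (sketch):
//   timeout_ms: options.timeout_ms ?? DEFAULT_ANALYTICS_QUERY_TIMEOUT_MS,

// Backend schema (sketch):
//   timeout_ms: yupNumber().integer().min(1_000).default(DEFAULT_ANALYTICS_QUERY_TIMEOUT_MS),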


