Database Architecture
Eventuall uses Cloudflare D1, a serverless SQLite database, accessed through Drizzle ORM. There are two separate databases: one for the webapp (users, events, access control, orders) and one for the workers (recording sessions, stage events, clips).
Why Two Databases?
The webapp and workers have different data lifecycles. User accounts, events, and permissions are long-lived and grow over time. Recording sessions, stage logs, and participant clips are ephemeral — created during a live event and rarely modified afterward. Separating them allows independent scaling and keeps each database focused.
| Aspect | Webapp Database | Workers Database |
|---|---|---|
| Data | Users, events, rooms, RBAC, orders, invitations | Recording sessions, stage events, clips, logs |
| Growth | Steady (new users, events) | Bursty (during live events) |
| Tables | ~20 tables | 4 tables |
| Schema file | apps/webapp/src/server/db/schema.ts | apps/workers/src/drizzle/schema.ts |
| Migrations | apps/webapp/migrations/ (56 files) | apps/workers/migrations/ (7 files) |
Schema Overview
Webapp: Core Tables
Users and Authentication
The users table stores profile data (name, email, pronouns, location, birthdate). Authentication is handled through userProviders (OAuth links) and sessions (active login sessions). There are also mcpLoginTokens for one-time login via the MCP tool and mcpApiKeys for programmatic API access.
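As a concrete illustration, a session lookup with Drizzle's relational API could look like the sketch below. The import paths, the sessions.token column, and the user relation are assumptions made for the example; getDB() is covered under "Connecting to D1" further down, and the real schema in apps/webapp/src/server/db/schema.ts is authoritative.
// Hypothetical sketch: resolve the user behind an active session token.
// Assumes a `token` column on sessions and a `user` relation; check schema.ts for the real names.
import { eq } from "drizzle-orm";
import { getDB } from "~/server/db";
import { sessions } from "~/server/db/schema";

export async function getUserForSession(token: string) {
  const db = getDB();
  const session = await db.query.sessions.findFirst({
    where: eq(sessions.token, token),
    with: { user: true },
  });
  return session?.user ?? null;
}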
Accounts and Ownership
Users belong to accounts (organizations) through the accountOwnership table. An account can have multiple owners, and a user can own multiple accounts. When a new user signs up, the system automatically creates a personal account for them.
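For example, listing the accounts a user owns is a two-table join through accountOwnership. The sketch below is illustrative only: the column names (userId, accountId) and import paths are assumptions.
// Hypothetical sketch: accounts owned by a user, via the accountOwnership join table.
import { eq } from "drizzle-orm";
import { getDB } from "~/server/db";
import { accountOwnership, accounts } from "~/server/db/schema";

export async function getAccountsForUser(userId: string) {
  const db = getDB();
  // Each row comes back as { account: <full accounts row> }.
  return db
    .select({ account: accounts })
    .from(accountOwnership)
    .innerJoin(accounts, eq(accounts.id, accountOwnership.accountId))
    .where(eq(accountOwnership.userId, userId));
}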
Events, Rooms, and Access
An event belongs to an account and contains one or more rooms. Access is controlled through accessGrants, which link a user to a specific event/room with a role. The RBAC system uses permissions, roles, roleToPermissions, and rolesInheritances to build a flexible permission hierarchy.
erDiagram
users ||--o{ accountOwnership : owns
accounts ||--o{ accountOwnership : "owned by"
accounts ||--o{ events : hosts
events ||--o{ rooms : contains
users ||--o{ accessGrants : "has access"
rooms ||--o{ accessGrants : "grants access to"
roles ||--o{ accessGrants : "assigned via"
roles ||--o{ roleToPermissions : "has"
permissions ||--o{ roleToPermissions : "granted to"
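To make the RBAC chain concrete, a permission check walks accessGrants → roles → roleToPermissions → permissions. The sketch below is a simplified illustration: it ignores role inheritance via rolesInheritances, and the column names (roleId, roomId, key, and so on) are assumptions.
// Hypothetical sketch: does this user hold a given permission in a room, via their access grant's role?
import { and, eq } from "drizzle-orm";
import { getDB } from "~/server/db";
import { accessGrants, permissions, roleToPermissions } from "~/server/db/schema";

export async function hasPermission(userId: string, roomId: string, permissionKey: string) {
  const db = getDB();
  const rows = await db
    .select({ id: permissions.id })
    .from(accessGrants)
    .innerJoin(roleToPermissions, eq(roleToPermissions.roleId, accessGrants.roleId))
    .innerJoin(permissions, eq(permissions.id, roleToPermissions.permissionId))
    .where(
      and(
        eq(accessGrants.userId, userId),
        eq(accessGrants.roomId, roomId),
        eq(permissions.key, permissionKey),
      ),
    )
    .limit(1);
  return rows.length > 0;
}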
Orders and E-Commerce
The orders system tracks ticket purchases across multiple retailers (Shopify, Eventbrite, Stripe, manual). externalSkuMappings map external product IDs to internal SKUs on a per-event basis. roomAccessTickets link a purchased ticket to room access.
Architect's Note: External SKU mappings are scoped per event. The same Shopify product can map to different internal rooms depending on which event it's associated with. This was designed for the scenario where a recurring event series reuses the same Shopify store but has different room configurations each time.
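In practice that scoping means a SKU lookup always includes the event ID. A hypothetical resolver is sketched below; the column names (retailer, externalId, internalSku) are assumptions, not the actual schema.
// Hypothetical sketch: resolve an external product ID (e.g. a Shopify product) to an internal SKU for one event.
import { and, eq } from "drizzle-orm";
import { getDB } from "~/server/db";
import { externalSkuMappings } from "~/server/db/schema";

export async function resolveSku(eventId: string, retailer: string, externalId: string) {
  const db = getDB();
  const mapping = await db.query.externalSkuMappings.findFirst({
    where: and(
      eq(externalSkuMappings.eventId, eventId),
      eq(externalSkuMappings.retailer, retailer),
      eq(externalSkuMappings.externalId, externalId),
    ),
  });
  return mapping?.internalSku ?? null;
}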
Workers: Recording Tables
The workers database has four tables:
- cloudcasterSessions — one row per recording session, tracking the Mux stream ID, LiveKit egress ID, recording status, and playback URL
- cloudcasterLogs — structured logs from the coordinator, workflow, and service layers
- stageEvents — records when participants enter and exit the live stage, with a time offset from the recording start
- participantClips — generated video clips for individual participants, created from stage event pairs (see the sketch after this list)
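To make "created from stage event pairs" concrete, here is a minimal, self-contained sketch of the pairing step. The field names (participantId, kind, offsetMs) are assumptions; the real logic lives in the workers codebase.
// Hypothetical sketch: pair "enter"/"exit" stage events into clip time ranges for one session.
type StageEvent = { participantId: string; kind: "enter" | "exit"; offsetMs: number };

export function pairStageEvents(events: StageEvent[]) {
  const open = new Map<string, number>(); // participantId -> offset of their open "enter" event
  const clips: { participantId: string; startMs: number; endMs: number }[] = [];
  for (const e of [...events].sort((a, b) => a.offsetMs - b.offsetMs)) {
    if (e.kind === "enter") {
      open.set(e.participantId, e.offsetMs);
    } else {
      const start = open.get(e.participantId);
      if (start !== undefined) {
        clips.push({ participantId: e.participantId, startMs: start, endMs: e.offsetMs });
        open.delete(e.participantId);
      }
    }
  }
  return clips;
}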
Drizzle ORM
Drizzle provides type-safe database access with a SQL-like API. The schema is defined in TypeScript, and Drizzle infers all query types automatically.
Connecting to D1
The webapp gets its database connection through getCloudflareContext():
// apps/webapp/src/server/db/index.ts
import { getCloudflareContext } from "@opennextjs/cloudflare";
import { drizzle } from "drizzle-orm/d1";
import * as schema from "./schema";

export const getDB = () => {
  const ctx = getCloudflareContext();
  return drizzle(ctx.env.DB, { schema });
};
The workers get their connection directly from the environment bindings:
// apps/workers/src/drizzle/db.ts
import { drizzle } from "drizzle-orm/d1";
import * as schema from "./schema";

export const getDB = (env: { DB: D1Database }) => {
  return drizzle(env.DB, { schema });
};
Pitfall: Do not cache the database client with React.cache() or store it in a module-level variable. On Cloudflare, each request gets a fresh D1 binding, and reusing a stale connection will cause errors. The codebase explicitly calls getDB() per request.
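In other words, create the client inside the request scope. The route handler below is an illustrative sketch, not code from the repository:
// Illustrative Next.js route handler: getDB() is called inside the request, never at module scope.
import { getDB } from "~/server/db";
import { events } from "~/server/db/schema";

export async function GET() {
  const db = getDB(); // fresh D1 binding for this request
  const allEvents = await db.select().from(events);
  return Response.json(allEvents);
}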
Query Patterns
Drizzle supports two query styles. The relational query API (.query.) is best for reading data with relationships:
const event = await db.query.events.findFirst({
where: eq(events.id, eventId),
with: {
rooms: true,
tickets: { orderBy: [tickets.price] },
},
});
The SQL-like API is better for inserts, updates, and complex queries:
const [created] = await db
.insert(events)
.values({ name: "My Event", accountId, status: "draft" })
.returning();
Transactions
D1 supports transactions through Drizzle's db.transaction() API. The CloudcasterWorkflow uses transactions to atomically create a session and its associated records:
await db.transaction(async (tx) => {
const [session] = await tx.insert(cloudcasterSessions).values({...}).returning();
await tx.insert(stageEvents).values({...});
});
Pitfall: D1 transactions are limited to a single database. You cannot span a transaction across the webapp and workers databases. If you need cross-database consistency, design operations to be idempotent and use compensating actions on failure.
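A minimal sketch of that pattern, with placeholder functions standing in for the real queries against each database:
// Hypothetical sketch: compensating action when one operation must write to both databases.
type CrossDbOps = {
  createWorkerSession: (roomId: string) => Promise<{ id: string }>;        // workers DB
  linkSessionToRoom: (roomId: string, sessionId: string) => Promise<void>; // webapp DB
  deleteWorkerSession: (sessionId: string) => Promise<void>;               // compensation
};

export async function startRecording(ops: CrossDbOps, roomId: string) {
  const session = await ops.createWorkerSession(roomId);
  try {
    await ops.linkSessionToRoom(roomId, session.id);
  } catch (err) {
    // Undo the workers-side write, since the two databases cannot share a transaction.
    await ops.deleteWorkerSession(session.id);
    throw err;
  }
  return session;
}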
Migrations
Drizzle Kit generates SQL migration files by comparing your TypeScript schema against the last snapshot. Each migration is a .sql file in the migrations/ directory.
Generating Migrations
When you change the schema (add a table, modify a column), run:
cd apps/webapp # or apps/workers
pnpm generate # generates a new .sql migration file
Drizzle compares the current schema against migrations/meta/ snapshots and produces a diff. The migration file gets an auto-generated name like 0055_add_event_description.sql.
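The diff is driven by a drizzle-kit config file in each app. A minimal sketch of what the webapp's drizzle.config.ts could contain (the repository's actual file may set more options):
// Minimal sketch of a drizzle-kit config; the paths are assumptions, not the repo's actual values.
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "sqlite",
  schema: "./src/server/db/schema.ts",
  out: "./migrations",
});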
Applying Migrations Locally
pnpm migrate:local
This runs wrangler d1 migrations apply against your local D1 instance. For local development and worktrees, Terraform runs this automatically during ./scripts/setup.sh.
Applying Migrations to Remote Environments
Pitfall: Migrations for preview and production environments must be applied manually. Terraform only auto-applies migrations when it creates new D1 databases. For existing environments, you must run:
cd apps/webapp
pnpm wrangler d1 migrations apply <database-name> --remote
This is a common source of errors: you deploy new code that references a column that doesn't exist yet because the migration wasn't applied. Always apply migrations before deploying schema-dependent code changes.
Architect's Note: There is currently no CI/CD step that auto-applies migrations to preview or production. This is intentional — D1 migrations are irreversible and we want a human to verify them before applying to shared environments. Consider adding a migration check to the PR pipeline that warns when unapplied migrations exist.
Seed Data
Custom seed migrations are generated with pnpm generate:seed. The existing seed file (migrations/seeds/account_user.sql) creates test accounts (Eventuall, Momentus, SG Blaise) and links them to development team email addresses.
Schema Conventions
The codebase follows consistent conventions (pulled together in the sketch after this list):
- IDs: text("id").primaryKey().$defaultFn(() => createId()) — CUID2 strings, not auto-incrementing integers
- Timestamps: integer("created_at", { mode: "timestamp" }) — stored as Unix timestamps in SQLite
- Enums: text("status", { enum: ["draft", "active", "archive"] }) — inline string enums, not separate tables
- Foreign keys: .references(() => table.column, { onDelete: "cascade" }) — with explicit cascade behavior
- Indexes: Defined in the table's third argument, named with a _idx suffix
- Relations: Defined separately using Drizzle's relations() function for type-safe joins
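A hypothetical table definition that applies all of these conventions at once. This table does not exist in the schema; the sketch assumes it lives in schema.ts next to the existing events table, and that createId comes from @paralleldrive/cuid2.
// Hypothetical table illustrating the conventions above (not part of the real schema).
import { createId } from "@paralleldrive/cuid2";
import { relations } from "drizzle-orm";
import { index, integer, sqliteTable, text } from "drizzle-orm/sqlite-core";

export const announcements = sqliteTable(
  "announcements",
  {
    id: text("id").primaryKey().$defaultFn(() => createId()),
    eventId: text("event_id")
      .notNull()
      .references(() => events.id, { onDelete: "cascade" }), // assumes `events` is defined in the same file
    status: text("status", { enum: ["draft", "active", "archive"] }).notNull().default("draft"),
    createdAt: integer("created_at", { mode: "timestamp" }).$defaultFn(() => new Date()),
  },
  (table) => ({
    eventIdIdx: index("announcements_event_id_idx").on(table.eventId),
  }),
);

export const announcementsRelations = relations(announcements, ({ one }) => ({
  event: one(events, { fields: [announcements.eventId], references: [events.id] }),
}));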
Views
The schema includes a SQL view eventParticipantsView that joins users and pending profiles with their access grants, roles, and room information. This simplifies participant queries by pre-computing the join across multiple tables.
Architect's Note: D1 does not support materialized views. The eventParticipantsView is a regular view, meaning it executes the underlying join on every query. For frequently accessed participant lists, consider caching the result in KV with a short TTL.
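A sketch of that caching approach, assuming a KV binding named KV and an eventId column on the view (both names are assumptions):
// Hypothetical sketch: cache the participant list in KV with a short TTL.
import { getCloudflareContext } from "@opennextjs/cloudflare";
import { eq } from "drizzle-orm";
import { getDB } from "~/server/db";
import { eventParticipantsView } from "~/server/db/schema";

export async function getEventParticipants(eventId: string) {
  const { env } = getCloudflareContext();
  const cacheKey = `participants:${eventId}`;

  // Return the cached copy if one exists.
  const cached = await env.KV.get(cacheKey, "json");
  if (cached) return cached;

  // Otherwise run the view's join and cache the result.
  const db = getDB();
  const rows = await db
    .select()
    .from(eventParticipantsView)
    .where(eq(eventParticipantsView.eventId, eventId));

  // KV enforces a minimum TTL of 60 seconds.
  await env.KV.put(cacheKey, JSON.stringify(rows), { expirationTtl: 60 });
  return rows;
}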