Backend Architecture
The Eventuall backend runs on Cloudflare Workers using the Hono framework for HTTP routing and Durable Objects for stateful logic. The workers application is separate from the Next.js webapp — it handles real-time features, video streaming orchestration, and WebSocket connections.
Cloudflare Workers and Hono
Cloudflare Workers are serverless functions that run at the edge. Each request spins up a worker, which processes the request and then shuts down. There's no persistent server process. The Hono framework provides Express-like routing on top of this model.
The main router is defined in apps/workers/src/routes/index.ts. It mounts three route groups:
- /cloudcaster — recording session management (start, stop, status, clips)
- /webhooks — incoming webhooks from LiveKit
- /event — event-related operations
Every request goes through CORS middleware first, then hits the appropriate route handler.
Request → CORS middleware → Route handler → Durable Object (if needed) → Response
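As a sketch of how this wiring looks with Hono (the sub-router module names below are assumptions for illustration; the real ones live under apps/workers/src/routes/):

```ts
import { Hono } from "hono";
import { cors } from "hono/cors";

// Hypothetical sub-router modules standing in for the real route files.
import { cloudcaster } from "./cloudcaster";
import { webhooks } from "./webhooks";
import { event } from "./event";

const app = new Hono();

// CORS middleware runs before every route handler.
app.use("*", cors());

// Mount the three route groups.
app.route("/cloudcaster", cloudcaster);
app.route("/webhooks", webhooks);
app.route("/event", event);

export default app;
```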
How Webapp Talks to Workers
The Next.js webapp communicates with workers through a typed Hono client. In the tRPC context (apps/webapp/src/server/api/trpc.ts), a Hono RPC client is initialized pointing to the workers URL. This gives the webapp type-safe access to worker endpoints — if you change a worker route's input type, TypeScript will catch mismatches at compile time.
The workers URL comes from the PARTYKIT_HOST environment variable, which is the Cloudflare tunnel subdomain assigned to this environment.
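A minimal sketch of that client setup, assuming the workers app exports its Hono type as AppType and exposes a /cloudcaster/status route (both are assumptions for illustration):

```ts
import { hc } from "hono/client";
// Hypothetical import: the workers app's exported Hono app type.
import type { AppType } from "@eventuall/workers";

// PARTYKIT_HOST holds the workers hostname for this environment.
const client = hc<AppType>(`https://${process.env.PARTYKIT_HOST}`);

// Example call; the route path and query shape are checked at compile time,
// so changing the worker route's input type breaks the webapp build.
async function getRecordingStatus(eventId: string) {
  const res = await client.cloudcaster.status.$get({ query: { eventId } });
  return res.json();
}
```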
Durable Objects
Durable Objects are the key architectural decision in the backend. Unlike regular Workers (which are stateless), Durable Objects are single-threaded, stateful instances that persist across requests. Each Durable Object has a unique ID and its own storage.
Eventuall uses four Durable Objects:
Event (PartyServer)
The Event DO manages real-time presence and state for a single event room. It uses the PartyKit framework (built on top of Durable Objects) with WebSocket hibernation for efficiency.
What it does:
- Tracks which users are connected via WebSocket
- Maintains presence state (online, away, offline) using a heartbeat mechanism
- Broadcasts "snapshots" of the event state to all connected clients
- Logs stage events (when participants enter/exit the live stage) during recordings
- Queries the CloudcasterCoordinator for active recording session data
Presence tracking: Clients send heartbeat messages every few seconds. If a heartbeat isn't received within 90 seconds, the user is marked as "away." A scheduled alarm runs every 30 seconds to clean up stale connections and remove users who haven't been seen in 24 hours.
Hibernation: The hibernate: true option means the DO can go to sleep when no messages are being exchanged. This dramatically reduces costs because you're not paying for idle WebSocket connections.
Architect's Note: Hibernation changes the programming model. You can't rely on in-memory state persisting between messages. Any state that needs to survive hibernation must be stored in the DO's durable storage (key-value store) or reconstructed from it.
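A rough sketch of hibernation-safe presence handling with partyserver. The message shape, storage keys, and field names are assumptions, not the real Event implementation; only the 90-second/24-hour/30-second thresholds come from the description above.

```ts
import { Server, type Connection } from "partyserver";

// Assumed presence record; the real Event DO's schema may differ.
type Presence = { status: "online" | "away" | "offline"; lastSeen: number };

export class Event extends Server {
  // Allow the DO to hibernate between WebSocket messages.
  static options = { hibernate: true };

  async onConnect(conn: Connection) {
    // Make sure the 30-second cleanup alarm is armed.
    if ((await this.ctx.storage.getAlarm()) === null) {
      await this.ctx.storage.setAlarm(Date.now() + 30_000);
    }
  }

  async onMessage(conn: Connection, raw: string | ArrayBuffer) {
    const text = typeof raw === "string" ? raw : new TextDecoder().decode(raw);
    const msg = JSON.parse(text);
    if (msg.type === "heartbeat") {
      // Persist the heartbeat: in-memory maps do not survive hibernation.
      await this.ctx.storage.put<Presence>(`presence:${msg.userId}`, {
        status: "online",
        lastSeen: Date.now(),
      });
    }
  }

  // Periodic cleanup; the handler name follows the Durable Object alarm API.
  async alarm() {
    const now = Date.now();
    const entries = await this.ctx.storage.list<Presence>({ prefix: "presence:" });
    for (const [key, presence] of entries) {
      if (now - presence.lastSeen > 24 * 60 * 60 * 1000) {
        await this.ctx.storage.delete(key); // not seen for 24 hours
      } else if (now - presence.lastSeen > 90_000 && presence.status === "online") {
        await this.ctx.storage.put<Presence>(key, { ...presence, status: "away" });
      }
    }
    await this.ctx.storage.setAlarm(now + 30_000); // re-arm for the next sweep
  }
}
```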
Queue (PartyServer)
The Queue DO manages collaborative queuing for events using Yjs, a CRDT (Conflict-free Replicated Data Type) library. It maintains pending and live queues of participants, allowing concurrent edits without conflicts.
Why Yjs? Multiple moderators might reorder participants simultaneously. CRDTs guarantee that all clients converge to the same state regardless of the order operations arrive. This is the same technology that powers collaborative text editors like Google Docs.
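To make the convergence property concrete, here is a tiny standalone Yjs example; the array name ("pending") is a placeholder, not the real Queue DO schema:

```ts
import * as Y from "yjs";

// Two moderators, each with their own replica of the queue document.
const modA = new Y.Doc();
const modB = new Y.Doc();

// Each moderator edits locally before any sync happens.
modA.getArray<string>("pending").push(["user-a"]);
modB.getArray<string>("pending").push(["user-b"]);

// Exchange updates in both directions; the order of application does not matter.
const updateA = Y.encodeStateAsUpdate(modA);
const updateB = Y.encodeStateAsUpdate(modB);
Y.applyUpdate(modA, updateB);
Y.applyUpdate(modB, updateA);

// Both replicas now hold identical queue contents.
console.log(modA.getArray<string>("pending").toArray());
console.log(modB.getArray<string>("pending").toArray());
```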
CloudcasterWorkflow
The CloudcasterWorkflow DO orchestrates the recording pipeline. Starting a recording involves multiple external services (Mux for streaming, LiveKit for video egress) that each take time to initialize. The workflow coordinates seven sequential steps, including creating a Mux live stream, waiting for it to be ready, starting LiveKit web egress to capture a composite page and stream RTMP to Mux, and updating the Event DO so connected clients know recording has started.
The entire start sequence uses blockConcurrencyWhile() for atomicity — either all services are created or the workflow cleans up and reports failure.
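A sketch of that pattern under the standard Durable Object API; the private helpers are hypothetical stand-ins for the real Mux/LiveKit/Event DO calls described above:

```ts
import { DurableObject } from "cloudflare:workers";

// Sketch only: helper method names and shapes are assumptions.
export class CloudcasterWorkflow extends DurableObject {
  async startRecording(eventId: string): Promise<void> {
    // blockConcurrencyWhile() holds any other incoming calls to this DO until
    // the callback settles, so the start sequence behaves atomically.
    await this.ctx.blockConcurrencyWhile(async () => {
      try {
        const stream = await this.createMuxLiveStream(eventId);
        await this.waitForStreamReady(stream.id);
        await this.startLiveKitWebEgress(eventId, stream.rtmpUrl);
        await this.notifyEventDO(eventId);
      } catch (err) {
        await this.cleanup(eventId); // undo anything partially created
        throw err;                   // surface the failure to the caller
      }
    });
  }

  // Hypothetical helpers with placeholder bodies.
  private async createMuxLiveStream(_eventId: string) { return { id: "", rtmpUrl: "" }; }
  private async waitForStreamReady(_streamId: string) {}
  private async startLiveKitWebEgress(_eventId: string, _rtmpUrl: string) {}
  private async notifyEventDO(_eventId: string) {}
  private async cleanup(_eventId: string) {}
}
```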
CloudcasterCoordinator
The CloudcasterCoordinator is a lock manager that prevents duplicate recording sessions. Only one recording can be active per event/room at a time. It acquires and releases locks, runs health checks every 30 seconds, enforces a 12-hour maximum session duration, and cleans up orphaned sessions where the workflow crashed.
Architect's Note: The coordinator + workflow split separates concerns: the coordinator manages "who is recording" (lock management), while the workflow manages "how to record" (service orchestration). This prevents a workflow crash from leaving orphan locks.
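A sketch of the lock-manager idea using durable storage and alarms; the key names, Lock shape, and method names are assumptions:

```ts
import { DurableObject } from "cloudflare:workers";

// Assumed lock record for one event/room.
type Lock = { sessionId: string; acquiredAt: number };

const MAX_SESSION_MS = 12 * 60 * 60 * 1000; // 12-hour session ceiling
const HEALTH_CHECK_MS = 30_000;             // health check every 30 seconds

export class CloudcasterCoordinator extends DurableObject {
  // Returns true if the caller now holds the recording lock for this event/room.
  async acquireLock(sessionId: string): Promise<boolean> {
    const existing = await this.ctx.storage.get<Lock>("lock");
    if (existing) return false; // another recording is already active
    await this.ctx.storage.put<Lock>("lock", { sessionId, acquiredAt: Date.now() });
    await this.ctx.storage.setAlarm(Date.now() + HEALTH_CHECK_MS);
    return true;
  }

  async releaseLock(sessionId: string): Promise<void> {
    const existing = await this.ctx.storage.get<Lock>("lock");
    if (existing?.sessionId === sessionId) {
      await this.ctx.storage.delete("lock");
    }
  }

  // Health check: expire sessions past the ceiling so a crashed workflow
  // cannot leave an orphaned lock behind forever.
  async alarm(): Promise<void> {
    const existing = await this.ctx.storage.get<Lock>("lock");
    if (!existing) return;
    if (Date.now() - existing.acquiredAt > MAX_SESSION_MS) {
      await this.ctx.storage.delete("lock");
      return;
    }
    await this.ctx.storage.setAlarm(Date.now() + HEALTH_CHECK_MS);
  }
}
```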
For a detailed walkthrough of the entire recording pipeline — including the composite page, Mux integration, LiveKit egress configuration, participant clip generation, and error handling — see Compositor and Recording Architecture.
Durable Object Communication
DOs communicate with each other through direct method calls (stubs), not HTTP fetch. When the CloudcasterWorkflow needs to update the Event DO, it calls broadcastNewSnapshot() on the Event DO's stub. This is faster and type-safe.
CloudcasterWorkflow → CloudcasterCoordinator (lock management)
CloudcasterWorkflow → Event DO (broadcast recording status)
Event DO → CloudcasterCoordinator (query session info)
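A sketch of what such a stub call looks like, assuming an RPC-typed DurableObjectNamespace binding; the import path and snapshot shape are assumptions, while broadcastNewSnapshot is the method named above:

```ts
// Hypothetical import of the Event DO class, used only to type the stub.
import type { Event } from "../durable-objects/event";

interface Env {
  EVENT: DurableObjectNamespace<Event>;
}

// Push a recording-status snapshot to every client connected to the event room.
export async function notifyEvent(env: Env, eventId: string, snapshot: unknown) {
  const id = env.EVENT.idFromName(eventId);  // deterministic DO id per event
  const stub = env.EVENT.get(id);            // stub, not an HTTP round trip
  await stub.broadcastNewSnapshot(snapshot); // direct, type-checked method call
}
```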
Queue System
Cloudflare Queues handle asynchronous command processing. The cloudcaster command queue receives messages (start/stop recording commands) and routes them to the appropriate Durable Object. A dead letter queue (DLQ) catches messages that fail processing after retries.
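A sketch of a queue consumer under this setup; the message shape, import path, and the workflow DO's method names are assumptions:

```ts
// Hypothetical import, used only to type the workflow stub.
import type { CloudcasterWorkflow } from "../durable-objects/cloudcaster-workflow";

interface CloudcasterCommand {
  type: "start" | "stop";
  eventId: string;
}

interface Env {
  CLOUDCASTER_WORKFLOW: DurableObjectNamespace<CloudcasterWorkflow>;
}

export default {
  async queue(batch: MessageBatch<CloudcasterCommand>, env: Env): Promise<void> {
    for (const message of batch.messages) {
      try {
        // Route the command to the workflow DO for this event.
        const id = env.CLOUDCASTER_WORKFLOW.idFromName(message.body.eventId);
        const stub = env.CLOUDCASTER_WORKFLOW.get(id);
        if (message.body.type === "start") {
          await stub.startRecording(message.body.eventId); // hypothetical method name
        } else {
          await stub.stopRecording(message.body.eventId);  // hypothetical method name
        }
        message.ack();
      } catch {
        // retry() re-queues the message; after the retry limit it lands in the DLQ.
        message.retry();
      }
    }
  },
};
```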
Webhook Handling
The /webhooks/livekit endpoint receives participant join/leave events from LiveKit. It validates the JWT signature, then notifies the Event DO to update presence state. Other LiveKit events (egress status, room lifecycle) were previously handled via webhooks but have been migrated to polling for reliability.
Pitfall: LiveKit webhooks arrive with JWT authentication. The webhook handler validates that the iss claim matches the configured signing key. If this validation fails (e.g., after key rotation), all webhooks silently fail. The polling approach in the workflow is immune to this.
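A sketch of the validation step using livekit-server-sdk's WebhookReceiver inside a Hono route; the env var names are assumptions:

```ts
import { Hono } from "hono";
import { WebhookReceiver } from "livekit-server-sdk";

// Assumed binding names; the real ones may differ.
type Bindings = { LIVEKIT_API_KEY: string; LIVEKIT_API_SECRET: string };

const webhooks = new Hono<{ Bindings: Bindings }>();

webhooks.post("/livekit", async (c) => {
  const receiver = new WebhookReceiver(c.env.LIVEKIT_API_KEY, c.env.LIVEKIT_API_SECRET);
  const body = await c.req.text();
  const authHeader = c.req.header("Authorization") ?? "";
  try {
    // receive() verifies the JWT against the configured key before parsing.
    const event = await receiver.receive(body, authHeader);
    if (event.event === "participant_joined" || event.event === "participant_left") {
      // ...notify the Event DO to update presence here...
    }
    return c.text("ok");
  } catch {
    // A signature or iss mismatch (e.g. after key rotation) lands here;
    // returning 401 keeps the failure visible instead of silent.
    return c.text("invalid webhook signature", 401);
  }
});

export { webhooks };
```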
Environment Bindings
Workers access Cloudflare services through environment bindings defined in wrangler.jsonc:
| Binding | Type | Purpose |
|---|---|---|
| DB | D1 Database | Workers database (sessions, logs, clips) |
| EVENT | Durable Object | Event real-time state |
| QUEUE | Durable Object | Collaborative queuing |
| CLOUDCASTER_WORKFLOW | Durable Object | Recording orchestration |
| CLOUDCASTER_COORDINATOR | Durable Object | Session lock management |
| CLOUDCASTER_COMMAND_QUEUE | Queue | Async command processing |
| EVENT_CACHE | KV Namespace | Fast key-value caching |
These bindings are auto-generated by Terraform and injected into the wrangler config. You don't configure them manually.
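For orientation, the corresponding bindings type in the workers code looks roughly like this; the property names come from the table above, while the exact generated shape is an assumption:

```ts
// Types are from @cloudflare/workers-types; the generated interface may differ in detail.
interface Env {
  DB: D1Database;                                   // Workers database (sessions, logs, clips)
  EVENT: DurableObjectNamespace;                    // Event real-time state
  QUEUE: DurableObjectNamespace;                    // Collaborative queuing
  CLOUDCASTER_WORKFLOW: DurableObjectNamespace;     // Recording orchestration
  CLOUDCASTER_COORDINATOR: DurableObjectNamespace;  // Session lock management
  CLOUDCASTER_COMMAND_QUEUE: Queue;                 // Async command processing
  EVENT_CACHE: KVNamespace;                         // Fast key-value caching
}
```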