Infrastructure and Deployment

All Eventuall infrastructure is managed through Terraform, which provisions Cloudflare resources (databases, storage, tunnels, DNS) dynamically per environment. This means every developer gets isolated infrastructure — your local changes never affect another developer's setup.

What Terraform Manages

When you run ./scripts/setup.sh, Terraform creates:

| Resource | Count | Purpose |
|---|---|---|
| D1 Databases | 3 | Webapp, Workers, and Statuspage databases |
| KV Namespaces | 2 | LiveKit metadata cache, event cache |
| R2 Buckets | 2 | Image uploads, general cache |
| Queues | 2 | Cloudcaster command queue + dead-letter queue |
| Cloudflare Tunnel | 1 | Public URL for your local dev server |
| DNS Records | Up to 6 | Subdomains for webapp, worker, logs, MCP, Claude proxy, statuspage |
| Wrangler configs | 3 | Generated wrangler.jsonc for webapp, workers, statuspage |
| Environment files | Multiple | .env, .env.ports, start-dev.sh |

Everything is namespaced by an environment ID derived from your Git branch name and a random suffix. Two developers working on different branches will get completely separate resources.

Environment Types

The platform supports three environment types, each with different infrastructure behavior:

Local Development

When you run ./scripts/setup.sh on your machine, Terraform detects you're in a local environment. It generates a unique environment ID from your branch name (sanitized, lowercase, max 20 characters) plus a timestamp and random hex. All resources are named with this ID as a suffix.
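The ID scheme can be sketched roughly as follows. This is illustrative only: the helper name and exact suffix format are assumptions, and the real logic lives in the setup script and Terraform.

```shell
# Rough sketch of the environment-ID scheme described above. The function
# name and exact suffix format are illustrative, not the real implementation.
sanitize_branch() {
  # lowercase, replace anything outside [a-z0-9] with '-', cap at 20 chars
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | cut -c1-20
}

# timestamp plus three bytes of random hex as the unique suffix
rand_hex=$(od -An -N3 -tx1 /dev/urandom | tr -d ' \n')
env_id="$(sanitize_branch "feat/Live-Chat")-$(date +%s)-${rand_hex}"
echo "$env_id"
```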

Your local dev server runs behind a Cloudflare Tunnel, which gives you a public URL like your-branch.eventuall.live. This is necessary because LiveKit webhooks, Twilio callbacks, and OAuth redirects all need a publicly accessible URL.

Database migrations are applied automatically as part of the Terraform setup.

Git Worktrees (Claude Code)

A Git worktree environment behaves like local development, but with complete isolation per worktree: each worktree gets its own Terraform state, its own tunnel, and its own set of resources. This allows parallel development on multiple branches without conflicts.

The worktree also allocates a "preview ID" from a shared pool via an HTTP API. This preview ID is used for LiveKit and Mux resource allocation, ensuring each environment gets its own set of API credentials.

Database migrations are applied automatically during setup.

Preview and Production

Preview and production environments use pre-existing resources that are not managed by Terraform's dynamic provisioning. The database IDs, KV namespace IDs, and other resource identifiers are fixed.

Pitfall: Database migrations for preview and production must be applied manually. This is the most common source of deployment issues. If you add a new column to the schema and deploy without migrating, queries that reference that column will fail at runtime.

To apply migrations manually:

cd apps/webapp
pnpm wrangler d1 migrations apply <database-name> --remote

cd apps/workers
pnpm wrangler d1 migrations apply <database-name> --remote

Architect's Note: There is no automated migration step in the CI/CD pipeline. This is intentional — D1 migrations are destructive (they can't be rolled back) and the team prefers human verification before applying to shared environments. A future improvement would be a pre-deploy check that fails the build if unapplied migrations exist.
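The pre-deploy check described in the note could look roughly like the sketch below. It assumes that `wrangler d1 migrations list --remote` prints one line per unapplied `.sql` file; verify that against your wrangler version before relying on it in CI.

```shell
# Sketch of a possible pre-deploy guard: fail the build if the remote
# database still has unapplied migrations. Assumes the `wrangler d1
# migrations list --remote` output prints one line per pending .sql file.
check_migrations() {
  local db="$1"
  local pending
  pending=$(pnpm wrangler d1 migrations list "$db" --remote 2>/dev/null | grep -c '\.sql' || true)
  if [ "$pending" -gt 0 ]; then
    echo "ERROR: $pending unapplied migration(s) on $db" >&2
    return 1
  fi
  echo "OK: no pending migrations on $db"
}
```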

Setup Process in Detail

Here's what happens when you run ./scripts/setup.sh:

graph TD
    A[Run setup.sh] --> B{Already configured?}
    B -->|Yes| C[Reuse existing config]
    B -->|No| D[Check prerequisites]
    D --> E[Load secrets from Doppler]
    E --> F[Generate worktree ID]
    F --> G[Run Terraform]
    G --> H[Create D1 databases]
    G --> I[Create KV namespaces]
    G --> J[Create R2 buckets]
    G --> K[Create queues]
    G --> L[Create Cloudflare tunnel]
    G --> M[Create DNS records]
    H --> N[Apply database migrations]
    G --> O[Generate wrangler.jsonc files]
    G --> P[Generate .env files]
    G --> Q[Generate start-dev.sh]
    N --> R[Run post-setup script]
    R --> S[Generate TypeScript types]
    S --> T[Setup complete]

Prerequisites

The setup script checks for:

  • cloudflared — Cloudflare tunnel client
  • jq — JSON processor
  • git — version control
  • A valid Doppler token (for secrets)

Secrets Management with Doppler

All API keys, OAuth credentials, and service tokens are stored in Doppler and fetched during Terraform initialization. The Doppler token itself is stored in either the TF_VAR_doppler_token environment variable or a .doppler.token file.

Terraform reads secrets from Doppler and injects them into the generated configuration files. You never store secrets in the repository — they flow from Doppler through Terraform into your local environment.

Environment variable overrides are supported with the EVENTUALL_ prefix. For example, EVENTUALL_AUTH_SECRET=xxx overrides the Doppler value.
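The override rule can be illustrated with a small sketch. The helper function is ours for illustration, not part of the repo; the real resolution happens inside the setup tooling.

```shell
# Illustrative sketch of the EVENTUALL_ override rule: if EVENTUALL_<NAME>
# is set in the environment, it wins; otherwise the Doppler value is used.
resolve_secret() {
  local name="$1" doppler_value="$2"
  local override_var="EVENTUALL_${name}"
  # ${!var} is bash indirect expansion: the value of the variable
  # whose name is stored in $override_var
  if [ -n "${!override_var:-}" ]; then
    printf '%s\n' "${!override_var}"
  else
    printf '%s\n' "$doppler_value"
  fi
}
```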

Wrangler Configuration

Terraform generates wrangler.jsonc files from templates in scripts/templates/. These files configure how Cloudflare's wrangler CLI deploys the application:

  • Webapp (apps/webapp/wrangler.jsonc) — D1 database binding, KV namespace for LiveKit metadata, R2 buckets for images and cache, service URLs
  • Workers (apps/workers/wrangler.jsonc) — D1 database binding, KV namespace for event cache, queue bindings, Durable Object definitions and migration tags
  • Statuspage (apps/statuspage/wrangler.jsonc) — D1 database binding, service URLs

Pitfall: The wrangler.jsonc files are gitignored because they contain environment-specific values (database IDs, KV namespace IDs). If they're missing, run ./scripts/setup.sh to regenerate them. Never commit these files.

Cloudflare Tunnel

Each environment gets its own Cloudflare Zero Trust tunnel. The tunnel creates a secure, encrypted connection from your local machine to Cloudflare's network, and DNS records point your subdomain to the tunnel endpoint.

The tunnel configuration maps subdomains to local ports:

| Subdomain | Destination | Purpose |
|---|---|---|
| {env}.eventuall.live | localhost:{webapp_port} | Next.js webapp |
| {env}-worker.eventuall.live | localhost:{worker_port} | Cloudflare Workers |
| {env}-logs.eventuall.live | localhost:8686 | Vector log streaming (optional) |
| {env}-mcp.eventuall.live | localhost:3333 | MCP server (optional) |
| {env}-status.eventuall.live | localhost:{statuspage_port} | Status page (optional) |

Ports are dynamically allocated by the port allocator module to avoid collisions when multiple environments run on the same machine.
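A cloudflared ingress configuration implementing this mapping might look like the sketch below. Hostnames and ports are illustrative, and the actual file is generated by Terraform.

```yaml
# Illustrative cloudflared config; the real one is generated by Terraform.
tunnel: <tunnel-id>
credentials-file: /path/to/credentials.json
ingress:
  - hostname: my-branch.eventuall.live
    service: http://localhost:3000      # webapp port (dynamically allocated)
  - hostname: my-branch-worker.eventuall.live
    service: http://localhost:8787      # workers port
  - hostname: my-branch-logs.eventuall.live
    service: http://localhost:8686      # Vector log streaming
  - service: http_status:404            # catch-all rule required by cloudflared
```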

Docker Containers (Remote Preview)

For remote preview environments (used by Claude Code and CI), the application runs in a Docker container. The Dockerfile at the project root builds a container with:

  • Node.js 22
  • Terraform 1.9.5
  • Cloudflared
  • Doppler CLI
  • Vector (log streaming)
  • Python (FastAPI metadata server)
  • Supervisor (process management)

The container exposes port 9999 for a metadata API and port 8686 for WebSocket log streaming. Inside, it runs the full setup process (Terraform, migrations, tunnel) and then starts the dev server.

Architect's Note: The Docker container runs as a non-root user (claude-user) with CAP_NET_BIND_SERVICE capability for binding to privileged ports. Memory is capped at 1024MB via NODE_OPTIONS. If the container runs out of memory during development, the Next.js build will fail silently — check docker logs for OOM errors.

Teardown

To destroy all resources created by Terraform:

pnpm teardown
# This runs: cd terraform && terraform destroy -auto-approve

This removes all D1 databases, KV namespaces, R2 buckets, queues, the tunnel, and DNS records. Data in D1 and R2 is permanently deleted.

If the environment used a preview ID from the pool, it's released back automatically during teardown.

Pitfall: R2 buckets must be empty before deletion. Terraform handles this by running MinIO Client (mc) to empty buckets before destroying them. If mc isn't installed, the destroy will still succeed but R2 buckets may be left behind as orphans.
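The bucket-emptying step can be sketched as below. The `r2` alias and bucket name are illustrative; the alias must first be configured with `mc alias set` pointing at your R2 endpoint and access keys.

```shell
# Sketch of emptying an R2 bucket with MinIO Client before destroy.
# The "r2" alias and bucket name are illustrative.
empty_bucket() {
  # --recursive --force removes every object under the bucket prefix
  mc rm --recursive --force "r2/$1"
}
# e.g. empty_bucket "eventuall-images-myenv"
```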

Common Infrastructure Tasks

Adding a New Environment Variable

  1. Add the variable to Doppler
  2. Reference it in terraform/main.tf where wrangler configs are generated
  3. Add it to the appropriate template in scripts/templates/
  4. Run ./scripts/setup.sh to regenerate configs

Adding a New D1 Table

  1. Modify the schema in apps/webapp/src/server/db/schema.ts
  2. Run cd apps/webapp && pnpm generate to create a migration
  3. Run pnpm migrate:local to apply locally
  4. Remember: apply manually to preview/production before deploying

Debugging Infrastructure Issues

Check the Terraform state:

cd terraform
terraform show          # Current state
terraform plan          # Planned changes
terraform output        # Output values (URLs, IDs)

Check tunnel status:

cloudflared tunnel info <tunnel-name>

Verify D1 database:

npx wrangler d1 execute <database-name> --command "SELECT count(*) FROM users" --local