# Infrastructure and Deployment
All Eventuall infrastructure is managed through Terraform, which provisions Cloudflare resources (databases, storage, tunnels, DNS) dynamically per environment. This means every developer gets isolated infrastructure — your local changes never affect another developer's setup.
## What Terraform Manages

When you run `./scripts/setup.sh`, Terraform creates:
| Resource | Count | Purpose |
|---|---|---|
| D1 Databases | 3 | Webapp, Workers, and Statuspage databases |
| KV Namespaces | 2 | LiveKit metadata cache, event cache |
| R2 Buckets | 2 | Image uploads, general cache |
| Queues | 2 | Cloudcaster command queue + dead letter queue |
| Cloudflare Tunnel | 1 | Public URL for your local dev server |
| DNS Records | Up to 6 | Subdomains for webapp, worker, logs, MCP, Claude proxy, statuspage |
| Wrangler configs | 3 | Generated `wrangler.jsonc` for webapp, workers, statuspage |
| Environment files | Multiple | `.env`, `.env.ports`, `start-dev.sh` |
Everything is namespaced by an environment ID derived from your Git branch name and a random suffix. Two developers working on different branches will get completely separate resources.
## Environment Types
The platform supports three environment types, each with different infrastructure behavior:
### Local Development

When you run `./scripts/setup.sh` on your machine, Terraform detects you're in a local environment. It generates a unique environment ID from your branch name (sanitized, lowercase, max 20 characters) plus a timestamp and random hex. All resources are named with this ID as a suffix.
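The derivation can be sketched roughly like this. This is a hypothetical illustration only: the real logic lives in the setup scripts and Terraform, and the function names `sanitize_branch` and `make_env_id` are invented here.

```sh
# Hypothetical sketch of environment ID derivation; the real implementation
# lives in scripts/setup.sh and Terraform. Function names are illustrative.

# Sanitize: lowercase, replace non-alphanumerics with '-', truncate to 20 chars.
sanitize_branch() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -c 'a-z0-9' '-' | cut -c1-20
}

# Append a Unix timestamp and 4 random hex bytes so two setups of the same
# branch never collide.
make_env_id() {
  printf '%s-%s-%s\n' "$(sanitize_branch "$1")" "$(date +%s)" \
    "$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')"
}

make_env_id "Feature/Login-Flow"   # e.g. feature-login-flow-1712345678-9f3a2b1c
```

Deterministic sanitization plus a random suffix is what lets two developers (or two worktrees) on the same branch name coexist without resource collisions.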
Your local dev server runs behind a Cloudflare Tunnel, which gives you a public URL like `your-branch.eventuall.live`. This is necessary because LiveKit webhooks, Twilio callbacks, and OAuth redirects all need a publicly accessible URL.
Database migrations are applied automatically as part of the Terraform setup.
### Git Worktrees (Claude Code)
Git worktrees work exactly like local development but with complete isolation per worktree. Each worktree gets its own Terraform state, its own tunnel, and its own set of resources. This allows parallel development on multiple branches without conflicts.
The worktree also allocates a "preview ID" from a shared pool via an HTTP API. This preview ID is used for LiveKit and Mux resource allocation, ensuring each environment gets its own set of API credentials.
Database migrations are applied automatically during setup.
### Preview and Production
Preview and production environments use pre-existing resources that are not managed by Terraform's dynamic provisioning. The database IDs, KV namespace IDs, and other resource identifiers are fixed.
**Pitfall:** Database migrations for preview and production must be applied manually. This is the most common source of deployment issues. If you add a new column to the schema and deploy without migrating, queries that reference that column will fail at runtime.
To apply migrations manually:

```sh
cd apps/webapp
pnpm wrangler d1 migrations apply <database-name> --remote

cd ../workers
pnpm wrangler d1 migrations apply <database-name> --remote
```
**Architect's Note:** There is no automated migration step in the CI/CD pipeline. This is intentional — D1 migrations are destructive (they can't be rolled back) and the team prefers human verification before applying to shared environments. A future improvement would be a pre-deploy check that fails the build if unapplied migrations exist.
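Such a gate could be sketched as follows. This is hypothetical, not part of the current pipeline; it assumes `wrangler d1 migrations list` prints one row per pending migration file (names like `0003_add_users.sql`), which should be verified against real wrangler output before relying on the pattern.

```sh
# Hypothetical pre-deploy gate: fail CI when unapplied D1 migrations exist.
# The grep pattern assumes pending migrations appear as file names like
# 0003_add_users.sql; verify against real `wrangler d1 migrations list` output.

has_pending() {
  # $1: captured output of `wrangler d1 migrations list`
  printf '%s\n' "$1" | grep -Eq '[0-9]+_[A-Za-z0-9_-]+\.sql'
}

check_app() {
  # $1: app directory; $2: D1 database name
  out="$(cd "$1" && pnpm wrangler d1 migrations list "$2" --remote)"
  if has_pending "$out"; then
    echo "Unapplied migrations in $1; apply them before deploying." >&2
    return 1
  fi
}

# In CI, before deploying:
#   check_app apps/webapp  <database-name>
#   check_app apps/workers <database-name>
```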
## Setup Process in Detail

Here's what happens when you run `./scripts/setup.sh`:
```mermaid
graph TD
    A[Run setup.sh] --> B{Already configured?}
    B -->|Yes| C[Reuse existing config]
    B -->|No| D[Check prerequisites]
    D --> E[Load secrets from Doppler]
    E --> F[Generate worktree ID]
    F --> G[Run Terraform]
    G --> H[Create D1 databases]
    G --> I[Create KV namespaces]
    G --> J[Create R2 buckets]
    G --> K[Create queues]
    G --> L[Create Cloudflare tunnel]
    G --> M[Create DNS records]
    H --> N[Apply database migrations]
    G --> O[Generate wrangler.jsonc files]
    G --> P[Generate .env files]
    G --> Q[Generate start-dev.sh]
    N --> R[Run post-setup script]
    R --> S[Generate TypeScript types]
    S --> T[Setup complete]
```
### Prerequisites

The setup script checks for:

- `cloudflared` — Cloudflare tunnel client
- `jq` — JSON processor
- `git` — version control
- A valid Doppler token (for secrets)
## Secrets Management with Doppler

All API keys, OAuth credentials, and service tokens are stored in Doppler and fetched during Terraform initialization. The Doppler token itself is stored in either the `TF_VAR_doppler_token` environment variable or a `.doppler.token` file.
Terraform reads secrets from Doppler and injects them into the generated configuration files. You never store secrets in the repository — they flow from Doppler through Terraform into your local environment.
Environment variable overrides are supported with the `EVENTUALL_` prefix. For example, `EVENTUALL_AUTH_SECRET=xxx` overrides the Doppler value.
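The override rule can be illustrated with a small sketch. The `resolve_secret` helper is hypothetical — the real resolution happens inside the setup tooling — but it captures the precedence: a prefixed shell variable wins over the Doppler value.

```sh
# Hypothetical illustration of the EVENTUALL_ override rule: a prefixed shell
# variable takes precedence over the value fetched from Doppler.
resolve_secret() {
  # $1: secret name (e.g. AUTH_SECRET); $2: value fetched from Doppler
  override_var="EVENTUALL_$1"
  eval "override=\${$override_var:-}"   # indirect lookup of EVENTUALL_<name>
  if [ -n "$override" ]; then
    printf '%s\n' "$override"
  else
    printf '%s\n' "$2"
  fi
}

EVENTUALL_AUTH_SECRET=local-override
resolve_secret AUTH_SECRET from-doppler   # prints: local-override
```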
## Wrangler Configuration

Terraform generates `wrangler.jsonc` files from templates in `scripts/templates/`. These files configure how Cloudflare's `wrangler` CLI deploys the application:

- **Webapp** (`apps/webapp/wrangler.jsonc`) — D1 database binding, KV namespace for LiveKit metadata, R2 buckets for images and cache, service URLs
- **Workers** (`apps/workers/wrangler.jsonc`) — D1 database binding, KV namespace for event cache, queue bindings, Durable Object definitions and migration tags
- **Statuspage** (`apps/statuspage/wrangler.jsonc`) — D1 database binding, service URLs

**Pitfall:** The `wrangler.jsonc` files are gitignored because they contain environment-specific values (database IDs, KV namespace IDs). If they're missing, run `./scripts/setup.sh` to regenerate them. Never commit these files.
## Cloudflare Tunnel
Each environment gets its own Cloudflare Zero Trust tunnel. The tunnel creates a secure, encrypted connection from your local machine to Cloudflare's network, and DNS records point your subdomain to the tunnel endpoint.
The tunnel configuration maps subdomains to local ports:
| Subdomain | Destination | Purpose |
|---|---|---|
| `{env}.eventuall.live` | `localhost:{webapp_port}` | Next.js webapp |
| `{env}-worker.eventuall.live` | `localhost:{worker_port}` | Cloudflare Workers |
| `{env}-logs.eventuall.live` | `localhost:8686` | Vector log streaming (optional) |
| `{env}-mcp.eventuall.live` | `localhost:3333` | MCP server (optional) |
| `{env}-status.eventuall.live` | `localhost:{statuspage_port}` | Status page (optional) |
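For a locally-managed `cloudflared` tunnel, this kind of mapping would be expressed as ingress rules like the fragment below. This is illustrative only: the tunnel here is managed by Terraform, which configures the equivalent routing through the Cloudflare API, and the hostnames and ports are placeholders.

```yaml
# Illustrative cloudflared ingress config; the real tunnel is configured by
# Terraform via the Cloudflare API, and ports are allocated dynamically.
ingress:
  - hostname: my-branch.eventuall.live
    service: http://localhost:3000      # webapp port (placeholder)
  - hostname: my-branch-worker.eventuall.live
    service: http://localhost:8787      # worker port (placeholder)
  - service: http_status:404            # catch-all rule (must be last)
```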
Ports are dynamically allocated by the port allocator module to avoid collisions when multiple environments run on the same machine.
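One way to sketch such an allocator is below. This is purely illustrative — the real port allocator module may use a different scheme — and it assumes `nc` is available for probing.

```sh
# Illustrative port allocation: hash the environment ID into a base port,
# then probe upward past ports already in use. The real allocator module
# may work differently.
port_for_env() {
  # Deterministic base port in 3000-3999, derived from the env ID checksum.
  sum="$(printf '%s' "$1" | cksum | cut -d' ' -f1)"
  echo $((3000 + sum % 1000))
}

next_free_port() {
  # Probe upward from $1 until a port is not accepting connections.
  p="$1"
  while nc -z 127.0.0.1 "$p" 2>/dev/null; do p=$((p + 1)); done
  echo "$p"
}

next_free_port "$(port_for_env "feature-login-flow-1712345678")"
```

Hashing first keeps allocations stable across restarts of the same environment; probing handles the rare collision between environments.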
## Docker Containers (Remote Preview)

For remote preview environments (used by Claude Code and CI), the application runs in a Docker container. The `Dockerfile` at the project root builds a container with:
- Node.js 22
- Terraform 1.9.5
- Cloudflared
- Doppler CLI
- Vector (log streaming)
- Python (FastAPI metadata server)
- Supervisor (process management)
The container exposes port 9999 for a metadata API and port 8686 for WebSocket log streaming. Inside, it runs the full setup process (Terraform, migrations, tunnel) and then starts the dev server.
**Architect's Note:** The Docker container runs as a non-root user (`claude-user`) with the `CAP_NET_BIND_SERVICE` capability for binding to privileged ports. Node's memory is capped at 1024 MB via `NODE_OPTIONS`. If the container runs out of memory during development, the Next.js build will fail silently — check `docker logs` for OOM errors.
## Teardown

To destroy all resources created by Terraform:

```sh
pnpm teardown
# This runs: cd terraform && terraform destroy -auto-approve
```
This removes all D1 databases, KV namespaces, R2 buckets, queues, the tunnel, and DNS records. Data in D1 and R2 is permanently deleted.
If the environment used a preview ID from the pool, it's released back automatically during teardown.
**Pitfall:** R2 buckets must be empty before deletion. Terraform handles this by running the MinIO Client (`mc`) to empty buckets before destroying them. If `mc` isn't installed, the destroy will still succeed but R2 buckets may be left behind as orphans.
## Common Infrastructure Tasks

### Adding a New Environment Variable
1. Add the variable to Doppler
2. Reference it in `terraform/main.tf` where wrangler configs are generated
3. Add it to the appropriate template in `scripts/templates/`
4. Run `./scripts/setup.sh` to regenerate configs
### Adding a New D1 Table

1. Modify the schema in `apps/webapp/src/server/db/schema.ts`
2. Run `cd apps/webapp && pnpm generate` to create a migration
3. Run `pnpm migrate:local` to apply it locally
4. Remember: apply manually to preview/production before deploying
### Debugging Infrastructure Issues

Check the Terraform state:

```sh
cd terraform
terraform show    # Current state
terraform plan    # Planned changes
terraform output  # Output values (URLs, IDs)
```
Check tunnel status:

```sh
cloudflared tunnel info <tunnel-name>
```
Verify the D1 database:

```sh
npx wrangler d1 execute <database-name> --command "SELECT count(*) FROM users" --local
```