Operate
Deployment
Pitchbar is built for Laravel Cloud as the primary target: the stack is FrankenPHP + Postgres + Redis, all of which Laravel Cloud provisions natively. Self-hosting is supported, but the operator owns more pieces.
Laravel Cloud
`infra/cloud.yaml` in the repo describes the environments and processes. The high-level shape:
- Region: `us-east` by default. Choose for proximity to your customers.
- App process: FrankenPHP in Octane mode. Auto-scaled.
- Worker process: Horizon, dedicated to the `crawl`, `index`, and `default` queues.
- Reverb process: persistent WebSocket server.
- Postgres: 16, with daily backups.
- Redis: 7, persistent.
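Pulled together, the shape above might look roughly like this in `infra/cloud.yaml`. This is an illustrative sketch only: the field names and nesting are assumptions, and the real file in the repo is authoritative.

```yaml
# Illustrative shape only -- keys here are assumptions, not the real schema.
environments:
  production:
    region: us-east
    processes:
      app:
        runtime: frankenphp
        octane: true
        autoscale: true
      worker:
        command: php artisan horizon
        queues: [crawl, index, default]
      reverb:
        command: php artisan reverb:start
    databases:
      postgres: { version: 16, backups: daily }
      redis: { version: 7, persistent: true }
```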
Environments
| Env | Purpose |
|---|---|
| preview | Per-PR ephemeral environments. Auto-spun on PR open, torn down on merge / close. |
| staging | Long-lived. Mirrors production config. Used for QA and pre-release verification. |
| production | The customer-facing environment. Releases gated on green CI + manual deploy. |
Compute sizing for v1 launch
Starting point. Adjust based on traffic.
| Component | Size | Why |
|---|---|---|
| App (Octane) | 2 instances × 2 vCPU / 2 GB | Hot path is mostly I/O-bound on LLM streaming. Two instances for HA. |
| Worker (Horizon) | 2 instances × 2 vCPU / 2 GB | Indexing throughput. Scale on queue depth. |
| Reverb | 1 instance × 1 vCPU / 1 GB | WebSocket, sticky. |
| Postgres | 2 vCPU / 4 GB / 50 GB SSD | Comfortable until ~10M messages. |
| Redis | 1 GB | Sessions, queue, hot caches. |
Domains
You typically need:
- Primary domain: `app.pitchbar.com` for the customer / admin app.
- Widget domain: the same host, or a separate `cdn.pitchbar.com` serving `/widget/widget.js`. The bundle has a content-hash query param, so aggressive caching is safe.
- Reverb domain: `realtime.pitchbar.com` if you split the WebSocket process onto its own host.
CI/CD
GitHub Actions workflows under `.github/workflows/`:
- `tests.yml`: PHP setup + Composer + Pest suite (with fakes, no network).
- `lint.yml`: Pint, ESLint, and TypeScript `tsc --noEmit`.
- `widget.yml`: widget bundle build + size budget check.
Deploys are gated on green CI; the actual deploy step is configured on Laravel Cloud (or your hosting equivalent), not in the workflow files.
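As a rough sketch of what `tests.yml` might contain (job and step names here are assumptions; the workflow files in the repo are authoritative):

```yaml
# Hypothetical sketch of tests.yml -- not the repo's actual workflow.
name: tests
on: [push, pull_request]
jobs:
  pest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.3'
      - run: composer install --no-interaction --prefer-dist
      - run: ./vendor/bin/pest   # suite uses fakes, no network access
```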
Migrations
Laravel Cloud runs `php artisan migrate --force` on every deploy. Migrations must be backwards-compatible: a deploy that adds a NOT NULL column to a populated table needs two steps:
- Deploy 1: add the column nullable, backfill it, and start writing it from app code.
- Deploy 2: alter the column to NOT NULL.
The same applies to renames and drops: never destructive in a single deploy.
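In raw SQL terms the two deploys reduce to something like the following. The table and column names are illustrative only; the real change ships as Laravel migrations.

```sql
-- Deploy 1: additive and backwards-compatible.
ALTER TABLE messages ADD COLUMN tenant_id bigint NULL;
UPDATE messages SET tenant_id = 1 WHERE tenant_id IS NULL;  -- backfill (illustrative)

-- Deploy 2: tighten, once every row is backfilled and every writer sets it.
ALTER TABLE messages ALTER COLUMN tenant_id SET NOT NULL;
```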
Backups
- Postgres: daily snapshots, retained 30 days. Point-in-time recovery enabled.
- Vector store: Cloudflare Vectorize / Qdrant have no built-in backup; rebuild from the `chunks` table by re-dispatching `IndexDocumentJob` for every document. `php artisan pitchbar:audit-vectors` reports drift between the chunks table and the live vector store; to repair, dispatch the job per row.
- R2 / object storage: versioning enabled.
- App secrets: Laravel Cloud's secret store is encrypted; back up `APP_KEY` separately (it's the master key for `app_settings` encryption).
Rollback
Laravel Cloud keeps the previous release for instant rollback. For schema-incompatible rollbacks (rare), restore from the latest snapshot.
Self-hosting
The same Docker setup that powers `docker-compose.yml` works for production with a few additions:
- Reverse proxy (Caddy or Nginx) terminating TLS in front of FrankenPHP.
- Managed Postgres + Redis (or self-managed with HA replicas).
- Horizon as a long-running service, monitored by systemd / a process manager.
- Reverb as its own process.
- Sentry / OTEL collector running locally or pointing at a SaaS.
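For the reverse-proxy piece, a minimal Caddy config might look like the following. The hostnames come from the Domains section above; the upstream ports are assumptions, so adjust for your topology.

```
# Hypothetical Caddyfile -- upstream ports are illustrative.
# Caddy obtains and renews TLS certificates automatically.
app.pitchbar.com {
    reverse_proxy 127.0.0.1:8000   # FrankenPHP / Octane app process
}

realtime.pitchbar.com {
    reverse_proxy 127.0.0.1:8080   # Reverb WebSocket process
}
```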
The `composer run dev` shortcut starts everything locally (Octane, queue worker, Reverb, Vite) for development.
Queue worker tick from a Cloudflare Worker cron
When in-cluster scheduling isn't available (cPanel shared hosting, a DIY VPS without systemd, Laravel Cloud's preview environments), an external Cloudflare Worker can drive the queue every 60 seconds by POSTing `/api/v1/internal/queue-tick` with the `INTERNAL_QUEUE_TOKEN` bearer secret. The endpoint invokes `php artisan queue:tick`, which spawns one `queue:work --once --stop-when-empty` pass with these defaults:
- `--max-time=55`: the loop exits before the 60-second tick boundary so consecutive ticks don't pile up.
- `--job-timeout=120`: individual jobs (mostly CrawlPageJob / IndexDocumentJob) get a 2-minute ceiling.
- Queues processed: `crawl`, `index`, `default`.
Build and deploy the Worker via `php artisan pitchbar:deploy-cron-worker`. The Worker body is templated from `WorkerDeployer` and ships with the tick parameters baked in. Rotate `INTERNAL_QUEUE_TOKEN` after deploy.
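The deployed Worker is templated from `WorkerDeployer`; a hand-written equivalent would look roughly like this sketch. The endpoint path and secret name come from this doc; the `APP_URL` binding, function names, and structure are illustrative assumptions.

```typescript
// Hypothetical Cloudflare Worker that drives the queue tick.
// INTERNAL_QUEUE_TOKEN is bound as a Worker secret; APP_URL as a plain var.
export interface Env {
  APP_URL: string;
  INTERNAL_QUEUE_TOKEN: string;
}

// Build the tick request separately so it can be tested without network access.
export function buildTickRequest(appUrl: string, token: string): Request {
  return new Request(`${appUrl}/api/v1/internal/queue-tick`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
  });
}

export default {
  // A "* * * * *" cron trigger in the Worker config fires this every 60 seconds.
  async scheduled(_event: unknown, env: Env): Promise<void> {
    await fetch(buildTickRequest(env.APP_URL, env.INTERNAL_QUEUE_TOKEN));
  },
};
```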
Crawler reliability
`CrawlPageJob` retries up to 3 times with backoff `[30, 90, 180]` seconds. The retry path branches on failure class:
- Rate-limit (429): `release(60)` without burning a retry slot. Every fan-out page tends to hit the same 429 wave, so the shared wait is productive.
- Permanent failures (curl DNS errors 6 and 7, connection refused, malformed URL, HTTP 400 / 401 / 403 / 404 / 410 / 451): call `$this->fail()` immediately. Without this, every dead URL burned the full 3-retry budget and produced a generic `MaxAttemptsExceededException` in the logs.
- Transient (5xx, network blip): normal retry with backoff.
Per-job timeout is 90 seconds; `failOnTimeout = true`, so a SIGTERM on timeout still runs the `failed()` callback and flips the Source row to `failed` with a customer-readable error.
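The branching above can be sketched as a pure classification function. TypeScript is used here for illustration only; the real logic lives in `CrawlPageJob` in PHP, and the status and errno sets are taken from this section.

```typescript
type FailureClass = "rate_limit" | "permanent" | "transient";

// curl errno 6 = could not resolve host, 7 = could not connect.
const PERMANENT_CURL_ERRNOS = new Set([6, 7]);
const PERMANENT_HTTP_STATUSES = new Set([400, 401, 403, 404, 410, 451]);

// Classify a crawl failure: 429s wait without burning a retry slot,
// known-dead URLs fail fast, everything else retries with backoff.
export function classifyFailure(opts: {
  curlErrno?: number;
  httpStatus?: number;
}): FailureClass {
  if (opts.httpStatus === 429) return "rate_limit";
  if (opts.curlErrno !== undefined && PERMANENT_CURL_ERRNOS.has(opts.curlErrno)) {
    return "permanent";
  }
  if (opts.httpStatus !== undefined && PERMANENT_HTTP_STATUSES.has(opts.httpStatus)) {
    return "permanent";
  }
  return "transient"; // 5xx, timeouts, network blips: retry with backoff
}
```

A rate-limited page would be released for 60 seconds, a 404 would fail immediately, and a 503 would take the normal backoff path.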